By Jon Cartwright
Reports of rumour-mongering, pettiness and mud-slinging may still be rife, but I think it’s safe to say that the fever surrounding the US primaries has at least partly subsided. Among those who have not been taking time to convalesce, however, are the folks at ScienceDebate 2008. According to an email they dropped into my inbox this morning, they’ve been busy working with a dozen national science organizations to prepare a list of 14 questions related to science policy for the presidential candidates. Get ready, they’ll be announcing it shortly.
Until then, check out this page on the Scientists and Engineers for America (SEA) website. Together with ScienceDebate 2008, the American Association for the Advancement of Science, the American Institute of Physics, the American Physical Society and 11 other organizations, the SEA has drawn up a list of seven questions on science policy for the 2008 congressional candidates.
Two candidates have already posted some responses. If you want to pester your local candidate, SEA gives you the option to send him or her an email.
By Jon Cartwright
Several of you have asked when I’m going to give you an update on Yoshiaki Arata’s cold-fusion demonstration that took place at Osaka University, Japan, three weeks ago. I have not yet come across any other first-hand accounts, and the videos, which I believe were taken by people at the university, have still not surfaced.
However, you may have noticed that Jed Rothwell of the LENR library website has put some figures with explanations relating to Arata’s recent work online. I’ve decided to go over them and some others here briefly to give you an idea of how Arata’s cold-fusion experiments work. It’s a bit more technical than usual, so get your thinking caps on.
Above is a diagram of his apparatus. It comprises a stainless-steel cell (2) containing a sample, which for the case of the demonstration was palladium dispersed in zirconium oxide (ZrO2–Pd). Arata measures the temperature of the sample (Tin) using a thermocouple mounted through its centre, and the temperature of the cell wall (Ts) using a thermocouple attached on the outside.
Let’s have a look at how these two temperatures, Tin and Ts, change over the course of Arata’s experiments. The first graph below is one of the control experiments (performed in July last year) in which hydrogen, rather than deuterium, is injected into the cell via a valve (5) operated by the controller (8):
At 50 minutes — after the cell has been baked and cooled to remove gas contamination — Arata begins injecting hydrogen into the cell. This generates heat, which Arata says is due to a chemical reaction, and the temperature of the sample, Tin (green line), rises to 61 °C. After 15 minutes the sample can apparently take no more hydrogen, and the sample temperature begins to drop.
Now let’s look at the next graph below, which is essentially the same experiment but with deuterium gas (performed in October last year):
As before, Arata injects the gas after 50 minutes, although it takes a little longer for the sample to become saturated, around 18 minutes. This time the sample temperature Tin (red line) rises to 71 °C.
At a quick glance the temperatures in both graphs appear, after saturation, to tail off as one would expect when heat escapes to the environment. However, in the case of deuterium there is always a significant temperature difference between Tin and Ts, indicating that the sample and cell are not reaching equilibrium. Moreover, after 300 minutes the Tin of the deuterium experiment is about 28 °C (4 °C warmer than ambient), while in the hydrogen experiment both Tin and Ts are at about 25 °C (1 °C warmer than ambient).
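Arata’s case rests on exactly this kind of cooling-curve comparison, so it is worth seeing the logic in miniature. Below is a sketch in Python using Newton’s law of cooling with made-up numbers loosely inspired by the graphs (illustrative data, not Arata’s): a cell with no internal heat source decays back to ambient, while one with a constant excess-heat term settles above it.

```python
import math

def cooling_curve(t, t_amb, t0, tau, excess=0.0):
    """Newton's-law cooling toward ambient, plus an optional constant
    offset representing a steady internal heat source."""
    return t_amb + excess + (t0 - t_amb - excess) * math.exp(-t / tau)

# Made-up numbers loosely inspired by the graphs (NOT Arata's raw data):
# the hydrogen run decays back to ambient, the deuterium run settles above it.
ambient = 24.0
hydrogen  = [cooling_curve(t, ambient, 61.0, tau=60.0) for t in range(0, 601, 20)]
deuterium = [cooling_curve(t, ambient, 71.0, tau=60.0, excess=4.0) for t in range(0, 601, 20)]

def tail_offset(series, ambient, n=5):
    """Average of the last n points minus ambient: near zero if the cell
    simply equilibrates, positive if something keeps generating heat."""
    tail = series[-n:]
    return sum(tail) / len(tail) - ambient

print(round(tail_offset(hydrogen, ambient), 2))   # near 0: no residual heat source
print(round(tail_offset(deuterium, ambient), 2))  # near 4: apparent excess heat
```

The point of the comparison is that a decaying exponential forgets its starting temperature; only a continuing heat source can hold the tail of the curve above ambient.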
These results imply there must be a source of heat from inside the cell. Arata claims that, given the large amount of power involved, this must be some form of fusion — what he prefers to call “solid fusion”. This can be described, he says, by the following equation:
D + D → 4He + heat
(According to this equation, there should be no neutrons produced as by-products — thanks to those of you who pointed this out on the last post.)
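For what it’s worth, the energy released by that channel follows directly from standard atomic mass tables; nothing here is specific to Arata’s set-up:

```python
# Q-value of the aneutronic channel D + D -> 4He, from standard atomic masses.
M_D   = 2.014101778   # deuterium atomic mass in unified atomic mass units (u)
M_HE4 = 4.002603254   # helium-4 atomic mass (u)
U_TO_MEV = 931.494    # energy equivalent of 1 u in MeV

q_value = (2 * M_D - M_HE4) * U_TO_MEV
print(f"Q = {q_value:.2f} MeV")  # about 23.8 MeV per fusion event
```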
If any of you are still reading, this graph below is also worth a look:
Here, Arata also displays data from deuterium and hydrogen experiments, but starts recording temperatures after the previous graphs finished, at 300 minutes. There are four plots: (A) is a deuterium and ZrO2–Pd experiment, like the one just described; (B) is another deuterium experiment, this time with a different sample; (C) is a control experiment with hydrogen, again similar to the one just described; and (D) is ambient temperature.
You can see here that the hydrogen experiment (C) reaches ambient temperature quite soon, after around 500 minutes. However, both the deuterium experiments remain 1 °C or more above ambient for at least 3000 minutes while still exhibiting the temperature difference between the sample and the cell, Tin and Ts.
Could this apparently lasting power output be used as an energy source? Arata believes it is potentially more important to us than hot or “thermonuclear” fusion and notes that, unlike the latter, it does not emit any pollution at all.
By Jon Cartwright
Most of you will never have raised an arm at Christie’s auction house. But, if you’re partial to the odd extravagance, there’s a first edition of Nicolaus Copernicus’s De Revolutionibus Orbium Coelestium (“On the Revolutions of Celestial Spheres”) up for grabs. It’ll probably cost you around a million dollars.
Bidding for the 1543 volume starts on 17 June, and I expect it will end up in the vault of some blasé collector. No-one will ever read it, but then it is in Latin, and who understands that these days? Nil desperandum, though, that’s what I like to say.
Still, I know of at least one physicist who would love to get his hands on it. Owen Gingerich, a historian of astronomy from Harvard University, has spent years tracing copies of Copernicus’s masterpiece, partly as an exercise for a book he wrote in 2004. A first edition would be the darling possession on his mantelpiece. “There aren’t that many copies in private hands these days,” he lamented on the phone to me a few moments ago.
Nowadays Gingerich finds solace in a second edition. Although considerably less valuable, it does have annotations by Rheticus, the young mathematician who persuaded Copernicus to publish his radical ideas. Gingerich did get the opportunity a few years ago to buy a bona-fide first edition for $50,000, which would have been a good investment but which unfortunately would have required him to re-mortgage his house.
Will Gingerich put in a bid at Christie’s this time round? “I figure that even if I had it I’d have to rent a bank safety deposit box to keep it in,” he says. “So I’ll give it a pass.”
By Jon Cartwright
Here’s a statistic for you, taken from a website called Sense About Science. It claims that over a million scientific papers are published every year. If that’s right, there must be something in the region of 20,000 published a week. Even if physics accounts for only a small fraction of the sciences, that still means we’re looking at several hundred every day. (I could dig out a reliable figure, but it’s probably not far wrong.)
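To make the back-of-the-envelope arithmetic explicit (the physics share below is my own illustrative guess, not a measured figure):

```python
# The blog's rough estimate, made explicit.
papers_per_year = 1_000_000          # Sense About Science's claimed figure
papers_per_week = papers_per_year / 52
physics_share = 0.05                 # assumed "small fraction" -- an illustrative guess

print(round(papers_per_week))                      # about 19,000: close to 20,000 a week
print(round(papers_per_week * physics_share / 7))  # over a hundred physics papers a day
```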
There’s no way we at Physics World can hope to keep you up to date with that many papers. Nor would you want us to — let’s face it, most papers deal only with very minor developments that would interest only those working in exactly the same field.
So, I would like to raise a question: should we bother to comb the journals for the biggest developments, or should we give up reporting research news altogether?
Actually, I’m not the first to raise it. I discovered the idea nestled at the bottom of an article written last week in the Guardian by Peter Wilby. He had been haranguing the Daily Mail for the way they report “breakthrough” developments in health research. (It’s the same old story: this week they tell you a glass of wine a day will increase your life expectancy; next week they will tell you the opposite.) Wilby proposes that, instead of mindlessly regurgitating seesawing opinions from the medical community, the media should offer “state of knowledge” features that give an overview of where the present scientific consensus is headed.
Would this feature-only approach benefit physics too? Conclusions seen in physics papers are usually straightforward to interpret — particularly compared with, say, the vagaries of health surveys — which would suggest the answer is no. However, there are still many difficulties.
One is that small developments in research are seen as being less newsworthy than those that go against prevailing opinion. In the latter instance, unless there is enough context to show how the research fits into the grand scheme of things, a news story can be misleading. Another, as I showed in my recent article on the use of embargoes in science publishing, is that much (if not most) science news is artificially timed to fit in with publishers’ agendas; in a sense, the news is not “new” at all. A feature-only approach would avoid both of these.
The main point I can see in favour of science news is that there are certain developments that deserve to be brought to people’s attention immediately. Think of the recent claims by the DAMA experiment team in Italy that they had detected dark matter on Earth. Or the discovery by Japanese physicists of a new class of high-temperature superconductors based on iron. Should we only report on such critical research? If so, where do we draw the line?
Let’s hear your thoughts. But bear in mind that if we do decide to scrap science news, I’ll be out of a job.
By Jon Cartwright
We have Leon Lederman to blame. For the “God particle”, that is. Since he published his 1993 book, The God Particle: If the Universe Is the Answer, What Is the Question?, the layperson might be forgiven for believing the Large Hadron Collider (LHC) is not searching for a particle called the Higgs boson, but a path to spiritual enlightenment.
Many physicists hate referring to Him. For some particle physicists, the “God particle” belittles the hordes of other theoretical particles that might be detected at the LHC. They say it reveals little of the particle’s function, and is savoured by writers with little rhetorical skill. For some non-particle physicists, the God particle epitomizes the hype that surrounds particle physics. Then there are those who think divine connotations are always a bad idea.
Are they, though? When a furore about the use of “God particle” began bouncing around the blogosphere last August, mostly in response to an article written by Dennis Overbye of the New York Times in which he defended the term, several agreed that religious metaphors should be an acceptable part of our language. Einstein used them all the time (e.g. “Quantum mechanics…yields much, but it hardly brings us close to the secrets of the Old One”) yet historians tend to conclude he was not a theist. Even when I began writing this blog entry I thought I might be clever and refer to the Higgs as the light at the end of the LHC’s tunnel — before I reminded myself that the Higgs is not the only particle of import they expect to find.
As Sean Carroll noted on the Cosmic Variance blog, it is a fear of pandering to the religious right that is driving the expulsion of religious metaphors. If certain atheists succeed, religious metaphors will go the way of the dodo. The God particle is not one of the best, but it might be one of the last.
Which brings me to the point of this entry (not that Earth-shattering, I’ll warn you now). This morning I was looking at the news links posted on the Interactions website, only to find one from the Guardian newspaper headlined “The hunt for God’s particle”. That’s right — you read it right the first time. “God’s particle”? Where’s the metaphor in that? Have we now come full circle, having compared the search for the Higgs boson to the path for spiritual enlightenment, only to reduce it to another of God’s creations?
Poor old Lederman must wonder what he started.
There’s been another development in the nascent field of iron-based high-temperature superconductors, which were recently shown to be able to turn superconducting at the very respectable temperature of 55 K.
Scientists at the National Institute of Standards and Technology (NIST) in the US have used neutron beams to investigate the magnetic properties of the iron-based materials. They found that, at low temperatures and when undoped, the materials make a transition into an antiferromagnetic state in which magnetic layers are interspersed with non-magnetic layers. But when the materials are doped with fluorine to make them into high-temperature superconductors, this magnetic ordering is suppressed.
This is reminiscent of the behaviour of cuprates — the highest-temperature superconductors known to date. Is this more than a coincidence? We’ll have to wait and see.
The research is published online here in Nature.
Robert Aymar, the director-general of CERN, has said that the Large Hadron Collider (LHC) — the world’s biggest particle physics experiment — will be in “working order” by the end of June, according to the French news agency Agence France-Presse (AFP).
It is not clear what Aymar means by this, given that the last announcement from CERN was for a July start-up. It seems unlikely that LHC has raced ahead of schedule, so it might be that he thinks the cooling of the magnets will be complete by the end of June. However, the status report on the LHC website would indicate otherwise.
I spoke to a press officer at CERN, and she said that the AFP journalists quoted Aymar from a recent meeting they had at the European lab. She said that, as far as she is aware, the beam commissioning is still set to take place in July.
I have not yet spoken to James Gillies, the chief spokesperson for CERN, because he is tied up in meetings all day. When he gets back to me, I will give you an update.
UPDATE 3.15pm: I have just spoken to Gillies and he said that there is no change to the start-up schedule — the plan is still to begin injecting beams towards the end of July. Aymar was indeed referring to the cooling of the magnets, which should be complete by the end of June. Four of the eight sectors have already been cooled to their operating temperature of 1.9 K; the last (sector 4–5) began the cooling process today.
The reason for the gap between the cooling and beam-injection is that there must be a series of electrical tests, which will take around four weeks.
On 23 March 1989 Martin Fleischmann of the University of Southampton, UK, and Stanley Pons of the University of Utah, US, announced that they had observed controlled nuclear fusion in a glass jar at room temperature, and — for around a month — the world was under the impression that its energy woes had been remedied. But, even as other groups claimed to repeat the pair’s results, sceptical reports began to trickle in. An editorial in Nature predicted that cold fusion would prove unfounded. And a US Department of Energy (DOE) report judged that the experiments did “not provide convincing evidence that useful sources of energy will result from cold fusion.”
This hasn’t prevented a handful of scientists persevering with cold-fusion research. They stand on the sidelines, diligently getting on with their experiments and, every so often, they wave their arms frantically when they think they have made some progress.
Nobody notices, though. Why? These days the mainstream science media wouldn’t touch cold-fusion experiments with a barge pole. They have learnt their lesson from 1989, and now treat “cold fusion” as a byword for bad science. Most scientists* agree, and some even go so far as to brand cold fusion a “pathological science” — science that is plagued by falsehood but practiced nonetheless.
[*CORRECTION 29/05/08: It has been brought to my attention that part of this last sentence appears to be unsubstantiated. After searching through past articles I have to admit that, despite it being written frequently, I can find no factual basis that “most scientists” think cold fusion is bad science (although public scepticism is evidently rife). However, there have been surveys to suggest that scientific opinion is more likely divided. According to a 2004 report by the DOE, which you can read here, ten out of 18 scientists thought that the hitherto results of cold-fusion experiments warranted further investigation.]
There is a reasonable chance that the naysayers are (to some extent) right and that cold-fusion experiments in their current form will not amount to anything. But it’s too easy to be drawn in by the crowd and overlook a genuine breakthrough, which is why I’d like to let you know that one of the handful of diligent cold-fusion practitioners has started waving his arms again. His name is Yoshiaki Arata, a retired (now emeritus) physics professor at Osaka University, Japan. Yesterday, Arata performed a demonstration at Osaka of one of his cold-fusion experiments.
Although I couldn’t attend the demonstration (it was in Japanese, anyway), I know that it was based on reports published here and here. Essentially Arata, together with his co-researcher Yue-Chang Zhang, uses pressure to force deuterium (D) gas into an evacuated cell containing a sample of palladium dispersed in zirconium oxide (ZrO2–Pd). He claims the deuterium is absorbed by the sample in large amounts — producing what he calls dense or “pycno” deuterium — so that the deuterium nuclei become close enough together to fuse.
So, did this method work yesterday? Here’s an email I received from Akito Takahashi, a colleague of Arata’s, this morning:
“Arata’s demonstration…was successfully done. There came about 60 people from universities and companies in Japan and few foreign people. Six major newspapers and two TV [stations] (Asahi, Nikkei, Mainichi, NHK, et al.) were there…Demonstrated live data looked just similar to the data they reported in [the] papers…This showed the method highly reproducible. Arata’s lecture and Q&A were also attractive and active.”
I also received a detailed account from Jed Rothwell, who is editor of the US site LENR (Low Energy Nuclear Reactions) and who has long thought that cold-fusion research shows promise. He said that, after Arata had started the injection of gas, the temperature rose to about 70 °C, which according to Arata was due to both chemical and nuclear reactions. When the gas was shut off, the temperature in the centre of the cell remained significantly warmer than the cell wall for 50 hours. This, according to Arata, was due solely to nuclear fusion.
Rothwell also pointed out that Arata performed three other control experiments: hydrogen with the ZrO2–Pd sample (no lasting heat); deuterium with no ZrO2–Pd sample (no heating at all); and hydrogen with no ZrO2–Pd sample (again, no heating). Nevertheless, Rothwell added that Arata neglected to mention certain details, such as the method of calibration. “His lecture was very difficult to follow, even for native speakers, so I may have overlooked something,” he wrote.
It will be interesting to see what other scientists think of Arata’s demonstration. Last week I got in touch with Augustin McEvoy, a retired condensed-matter physicist who has studied Arata’s previous cold-fusion experiments in detail. He said that he has found “no conclusive evidence of excess heat” before, though he would like to know how this demonstration turned out.
I will update you if and when I get any more information about the demonstration (apparently there might be some videos circulating soon). For now, though, you can form your own opinions about the reliability of cold fusion.
You might recall that a while back physicsworld.com reported on a prediction for a peculiar event that takes place on the two equinoxes. On 20 March and 22 September (or thereabouts), at two places on the Earth’s surface, many of the gravitational forces in the Milky Way should cancel out.
Such a quiet time in the turmoil of our galaxy provides an ideal opportunity for a ruthless test of Newton’s laws of motion. Some physicists think that if there were any deviation in the laws at very low accelerations it would mean dark matter — the elusive substance thought to make up around 85% of the matter in the universe and the dream catch of experiments worldwide — does not exist. Instead, all the phenomena associated with dark matter could be explained by a slight alteration in the laws known as modified Newtonian dynamics (MOND).
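For the curious, the flavour of MOND can be captured in a few lines. This sketch uses the commonly quoted acceleration scale a0 ≈ 1.2 × 10⁻¹⁰ m/s² and the so-called “simple” interpolating function, which is one standard textbook choice and not necessarily the one in Ignatiev’s paper:

```python
import math

A0 = 1.2e-10  # MOND acceleration scale in m/s^2 (commonly quoted value)

def mond_acceleration(a_newton):
    """True acceleration under MOND with the 'simple' interpolating function
    mu(x) = x/(1+x), solving mu(a/A0) * a = a_newton exactly."""
    return 0.5 * (a_newton + math.sqrt(a_newton**2 + 4 * a_newton * A0))

# Far above A0 the correction is utterly negligible...
print(mond_acceleration(9.8) / 9.8)      # ratio is ~1: everyday physics unchanged
# ...but at accelerations below A0 the deviation becomes large,
# which is why the quiet spots at the equinoxes are interesting.
print(mond_acceleration(1e-11) / 1e-11)  # ratio well above 1
```

This is why a terrestrial test is so hard: only where the galaxy’s pulls nearly cancel does the local acceleration drop into the regime where MOND and Newton disagree.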
When Alex Ignatiev from the Theoretical Physics Research Institute in Melbourne, Australia, came up with the idea for the equinoctial experiment, there were a couple of problems with his proposal. First, there was a worry that stray icebergs at the high latitudes, where one of the experiments would have to be performed, might give a false gravitational signal. Second, Ignatiev did not know the exact time that the desired signal would occur.
Now, in a new paper, he has resolved both of these. He has shown that even the biggest icebergs would not produce a signal big enough to confuse the data. And he has also shown how to predict the exact signal times.
One of the referees for Ignatiev’s paper has given a ringing endorsement to the proposal: “MOND is the leading alternative to cosmic dark matter. It has passed a surprising number of astronomical tests and is desperately in need of laboratory tests. The author’s idea for testing MOND in a terrestrial setting is the only viable suggestion I’ve ever heard for such a possibility. This is an incredibly important problem, and deserves to be explored just as much as CDMS and the many other dark matter search experiments.”
Astrophysicists have a better idea of how dust obscures the light from galaxies, according to a paper published in Astrophysical Journal Letters.
It is already well known that dust, which permeates all galaxies, attenuates the light reaching Earth from the cosmos. It absorbs light of most wavelengths and then re-emits it as a blanket of infrared radiation. Now, Simon Driver of St Andrews University in the UK and colleagues have produced the first model that accounts for this absorption.
One of the model’s implications — that dust absorbs just under half the radiation produced by stars — will not be a surprise to astronomers. They already know this, having compared the average magnitude of the infrared radiation in the sky with the magnitude of the radiation from pinpoint sources like stars and galaxies. But what might be of interest is that Driver and colleagues can show how the dust affects the light output of galaxies depending on their orientation.
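To get a feel for why orientation matters, here is a textbook toy model, a uniform dust slab viewed at an angle. It is not Driver’s actual model, just an illustration of how inclination lengthens the light path through the dust:

```python
import math

def transmitted_fraction(tau_face_on, inclination_deg):
    """Fraction of starlight escaping a uniform dust slab of face-on optical
    depth tau, viewed at a given inclination (0 = face-on, approaching 90 = edge-on).
    Tilting the disc stretches the path length by 1/cos(inclination)."""
    path_tau = tau_face_on / math.cos(math.radians(inclination_deg))
    return math.exp(-path_tau)

# A face-on optical depth of ~0.7 lets through about half the light head-on,
# and progressively less as the disc tilts toward edge-on.
for inc in (0, 30, 60, 75):
    print(inc, round(transmitted_fraction(0.7, inc), 2))
```

Even this crude slab reproduces the qualitative point in the paper: the same galaxy looks substantially dimmer edge-on than face-on, so any census of starlight has to correct for inclination.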
I spoke with Alastair Edge of Durham University, who is familiar with Driver’s team’s work, and he was pleased that the researchers have managed to model the dust successfully. He followed up our conversation with an email: “The authors have made an important link between the observed properties of the galaxies we see from the light coming directly from their stars to the amount of long wavelength radiation we see coming from the dust within the galaxies. Obtaining a match between the energy absorbed and that re-radiated allows us to understand the global properties of galaxies in a more holistic fashion.”