One fairly obvious, little-discussed variety of low-hanging technological fruit consists of those ideas and devices that are already known – to but a few. Secrets.

Sometimes important secrets are deliberately kept for a long time. Consider the Chamberlen family.

Peter Chamberlen the elder (1560–1631) was the son of a Huguenot surgeon who had left France in 1576. He invented obstetric forceps, a surgical instrument similar to a pair of tongs, useful in extracting the baby in a difficult birth. He, his brother, and his brother’s descendants preserved and prospered from their private technology for 125 years. They went to a fair amount of effort to preserve the secret: the pregnant patient was blindfolded, and all others had to leave the room. The Chamberlens specialized in difficult births among the rich and famous.

More recently, in 1998, Yoel Fink et al. published a method of constructing a multilayer thin-film dielectric mirror that is highly reflective over a broad range of wavelengths at all angles – something that the books said couldn’t be done. Joshua Winn was playing with a code modeling dielectric stacks and noticed that it seemed to reflect over a wide range of angles – which fairly rapidly led to a theoretical explanation of why this would work. But if memory serves, after that method was published, it turned out that five different optics shops had been doing this for years – in Russia, England, and the US. They just never felt like publishing, and why should they have?

Or consider the history of the Fast Fourier Transform (FFT), one of the most useful algorithms known to man. John Tukey had an idea for speeding up the calculation of the discrete Fourier transform; Dick Garwin saw that it was incredibly useful and pushed it, while Cooley wrote the program. But the idea goes back a bit further in time: Good published a related idea in 1958, L. H. Thomas in 1948, Danielson and Lanczos in 1942, Stumpff in 1939, J. D. Everett in 1860, A. Smith in 1846, and F. Carlini in 1828. The first treatment of the algorithm, and the only one as general as the Cooley-Tukey article, was by some geek named Carl Friedrich Gauss, back in 1805 (17 years before Fourier published his work). An unpublished paper on this topic was included in Gauss’s complete works, published (in Latin, of course) in 1866. If someone (a Jesuit?) had just looked in the right place and talked to Babbage, the British would have been doing digital signal processing in the Crimean War.

Sometimes the future is already here, but not generally known. Sometimes it’s hidden, sometimes it’s forgotten, sometimes it’s in Latin. There have to be better ways of finding and disseminating those secrets.

“There have to be better ways of finding and disseminating those secrets.”

Welcome to the internets brah.

Lovely article. The finding is certainly widespread. Every “finding” in psychology tends to be a re-hash of a previous finding. The “Flynn Effect”, as Jim Flynn and others have noted, long precedes him. http://dx.doi.org/10.1016/j.intell.2013.03.008

Different minds bump into the same realities at different times, and in slightly different ways.

However, why should secrets be disseminated? Fermat was right not to be too specific about the simple proof of his last theorem. More fun that way: the joy of knowing what others do not know, or do not yet know. Precisely the sort of mindset which sometimes inspires the problems set for us at West Hunter.

Those in the know strongly suspect that his ‘simple proof’ was wrong.

There’s the story about Ed Thorp discovering the Black-Scholes model a decade before Black and Scholes, and making a killing with it. He got the dough, they got the Nobel.

http://dealbreaker.com/2012/11/traders-copy-academics-who-copy-traders
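Thorp’s edge is easier to appreciate with the formula in front of you. Here is a minimal sketch of the standard Black-Scholes European call price (the function names are mine, and this is the textbook formula, not Thorp’s actual trading method):

```python
import math

def norm_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
```

With S = K = 100, r = 5%, sigma = 20%, and one year to expiry, this gives a call price of about 10.45. Anyone pricing options without it in the 1960s was leaving money on the table for whoever had it.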

Actually, working in the finance industry: Black-Scholes was only really used in the 80s and 90s, and the main results we got from it were the portfolio-insurance crash in 1987 and LTCM in the late 90s. After 1987, sensible people started to modify it to make it look similar to how they had priced options before the fancy theories came out…

Isn’t this related to the ‘thick problem’ idea you were talking about a little while back? Even if a Jesuit had seen Gauss’ note, how many would have appreciated it?

The easy way to solve this problem is just to copy a smart guy’s mind into a computer. Then crank the simulation speed x1000 and see how long it takes him to connect the dots across the internet. That shouldn’t be too much work, right?

not impossible though and the most credible singularity scenario. just optimize what we already know

Fermat never mentioned the Last Theorem in any of his correspondence. There is only the marginal note in his copy of Bachet’s translation of Diophantus. Nearly all of the results stated by Fermat in his correspondence concerned quadratic forms, together with a handful concerning cubic and quartic forms. He does state the case of the Last Theorem for n = 3 and n = 4. For n = 4 he actually gives the proof in his correspondence, which is very unusual for Fermat; presumably he thought it was so easy there was no point in keeping his proof secret. Otherwise he gives only cryptic hints at how he proved his results.

He is, however, very careful to distinguish between results he claimed to have proven and results he only conjectured. With the exception of a casual remark he made in response to an inquiry from Mersenne, that all Fermat numbers were probably prime, all his claims, both those he claimed to have proven and those he only conjectured, have turned out to be true. It took hundreds of years for mathematicians like Euler, Lagrange, Legendre, and Gauss to establish all of his theorems and conjectures, with Weber mopping up most of the remaining conjectures at the end of the nineteenth century.

Since Fermat never mentioned the Last Theorem in his correspondence his “proof” was most likely a mistake that he quickly realized was wrong before he had a chance to mention it to anybody.

Gauss did a lot of stuff which he never published. He basically had developed the theory of elliptic functions years before Abel and Jacobi.

Hermann Grassmann was a nineteenth century mathematician who published a huge amount of stuff on geometry which his contemporaries almost totally ignored. In the twentieth century his results were slowly rediscovered by others. I’ve heard that there were papers published on the theory of spinors in the 1930’s that contained nothing that was not already available in Grassmann’s work. Frustrated by the neglect of his mathematical work Grassmann became Germany’s leading Sanskrit scholar.

Speaking of low-hanging fruit: in the early 1930s Siegel was perusing unpublished manuscripts of Riemann and came across what is now called the Riemann-Siegel formula, which he published. A result Riemann had found in the 1850s was, when it finally appeared in the 1930s, one of the most significant results of its day on the Riemann zeta function.

People not only have to notice, there has to be some critical mass of people who want to know the truth. Spitzka saw mental illnesses as brain diseases in the 19th C (nor was he the only one), but this got pushed out by Freud and his descendants, who preferred to think it was all about Sex. This was allowed to happen not because Sigmund was enormously persuasive, but because it was what European intellectuals, and then Americans, wanted to hear. What fun to think oneself advanced because sex dared to be mentioned, while those who disagreed could be dismissed as ignorant and pathological!

http://assistantvillageidiot.blogspot.com/2013/03/madness.html

There might still be gems in Newton’s notes. Worth a look, or already done to death?


Sometimes you see this in private industry, and sometimes you see it in the government. A Dutch researcher, Wim van Eck, published a paper on intercepting signals radiated from CRTs http://cryptome.org/jya/emr.pdf, and it eventually turned out the US government had known about the phenomenon for years, and chose to keep it secret. http://www.nsa.gov/public_info/_files/cryptologic_spectrum/tempest.pdf

On a related but distinct subject: in IP law, the trade secret is well established. You are entitled to protect a valuable secret as long as you take appropriate steps to keep it secret. You do always run the risk of some academic stumbling on your technique and publishing it, but the term of a trade secret is unlimited. A patent, on the other hand, is time-limited, but the exclusivity of a patent is the bribe you get for disclosing your secrets to the world. There is an incredible wealth of practical knowledge stored in patent filings, but it can be hard to sift through them to find something useful, even using the systems at the patent office in DC.

Since our culture is fanatically narcissistic / Cultural Marxist, politically incorrect ideas constitute a source of “low hanging fruit.” They are not secrets, per se, but ideas rejected for political / cultural / religious reasons. The damage of a high carb diet was exposed in pre-war Germany. As cultural Marxists captured medical/nutritional institutions and foisted a high carb diet (along with a grain industry – it is illegal to advertise low carb in breadbasket Canada), the results in obesity and diabetes were catastrophic. Meat/animal products are not PC, so therefore they cannot be healthy. The vegan cultural Marxist lifestyle was foisted on the population, consequences be damned.

Yeah, the vegan high carb diet. Oh, God…

This is a strong argument for either watertight patent laws, which I don’t think are possible, or a significant part of GDP going to technology prizes.

I have read that third parties get three times as much value from inventions as the inventors do, which means prizes should be about three times the value obtained by patents. This is difficult to estimate, but it looks like at least 6% of GDP.

You may be interested in http://a-place-to-stand.blogspot.co.uk/2013/01/did-patents-create-step-change-in.html

“Or consider the history of the Fast Fourier Transform (FFT), one of the most useful algorithms known to man. … If someone (a Jesuit?) had just looked in the right place and talked to Babbage, the British would have been doing digital signal processing in the Crimean War.”

The FFT is just a fast way of computing the DFT (Discrete Fourier Transform), using a divide-and-conquer algorithm to do the job in O(n log n) rather than O(n^2) time. There is nothing an FFT can do that a plain old DFT calculation cannot.
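That equivalence is easy to demonstrate. A minimal sketch (toy code of my own, not anyone’s production FFT) of a naive DFT next to a radix-2 Cooley-Tukey FFT, the same divide-and-conquer trick Gauss and Tukey found; both produce exactly the same coefficients:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: ~N^2 multiply-adds."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT: ~N log N operations; N must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    # Divide: transform the even- and odd-indexed halves recursively.
    even, odd = fft(x[0::2]), fft(x[1::2])
    # Conquer: combine with the twiddle factors.
    twiddled = [cmath.exp(-2j * math.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddled[k] for k in range(N // 2)] +
            [even[k] - twiddled[k] for k in range(N // 2)])
```

For an input like [1, 2, 3, 4, 0, 0, 0, 0] the two functions agree to floating-point precision; the only difference is the step count, and that difference grows without bound as N does.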

The FFT became very valuable in the age of fast computations. When speed was not a requirement, and engineers performed calculations by hand, the DFT sufficed. A Babbage engine, even if it worked in Babbage’s lifetime (which it did not), was hand-cranked and could not have operated fast enough for the FFT algorithm to make any difference over a DFT in that era.

Speed was always a requirement, you doofus. Why do you think people used logarithms, and, before that, prosthaphaeresis?

The real issue is the size of the data set, not the inherent speed of the computing process. Efficient algorithms like the FFT, whose computational requirements grow slowly with N, become relatively more valuable as N increases. I once replaced an N-cubed algorithm with one that took linear time, when N was about 10,000. Made a difference.
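The arithmetic behind that is worth sketching. A back-of-the-envelope script (a sketch using idealized operation counts, not a benchmark) showing how fast the advantage grows with N:

```python
import math

# Idealized step counts: how the payoff of a cheaper algorithm
# scales with problem size N.
for n in (8, 1_024, 10_000):
    naive_dft = n * n             # ~N^2 operations
    fast_fft = n * math.log2(n)   # ~N log2 N operations
    cubic = n ** 3                # an O(N^3) routine, like the one replaced above
    print(f"N={n:>6}:  N^2 / N log N = {naive_dft / fast_fft:8.1f},  "
          f"N^3 / N = {cubic / n:.0e}")
```

At N = 10,000 the cubic-to-linear ratio is a factor of a hundred million, which is why swapping algorithms at that size “made a difference,” and why the FFT’s edge keeps widening as data sets grow.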

That said, an FFT can save time even on a fairly small data set. Gauss said: “truly, that method greatly reduces the tediousness of mechanical calculations; success will teach the one who tries it.”

You’re saying that computational efficiency is not valuable. Gauss thought otherwise. Who should I believe? I’m torn.

Your post was about low-hanging fruit, making the conjecture that if the FFT had been widely known at the time of Babbage, then signal processing might have flourished during the era of the Crimean War. My point was that computers were not available during that time, and a hand-cranked device such as Babbage’s Engine would not have been fast enough to exploit the FFT.

“The real issue is the size of the data set, not the inherent speed of the computing process.”

Putting aside that there’s always a time vs. space trade-off, in this case the speed of the computation (the number of steps required to compute the answer) depends on the data size (the length of the input vector). Which is why, as you yourself have pointed out:

“Efficient algorithms like the FFT, whose computational requirements grow slowly with N, become relatively more valuable as N increases.”

The reason these algorithms become more valuable is that they take relatively less time to compute as N increases, as you’ve indicated above.

“I once replaced an N-cubed algorithm with one that took linear time, when N was about 10,000. Made a difference.”

Of course it did. I wonder what kind of programmer would use an O(n^3) algorithm when an O(n) algorithm was available? What did you do, replace a bubble sort with a hash look-up? Didn’t that save you computing time?

“You’re saying that computational efficiency is not valuable.”

Nonsense. After all the years of numerical and scientific programming I’ve done, I would never make such a claim.

My point was that you could not really exploit the utility of the FFT at the time of Babbage, because the era was too primitive to (1) store input vectors of large N and (2) hand-crank the Babbage machine fast enough to make any appreciable difference in the calculation of a FFT over a DFT algorithm.

“Gauss thought otherwise. Who should I believe? I’m torn.”

Not just Gauss. Donald Knuth (who pioneered the analysis of computational complexity) also thought so, as does any undergraduate computer science student, including myself. Believe what you like.