In 2001, MIT's Technology Review published a special edition: 10 Emerging Technologies That Will Change the World. You can read the original publication here. These are my notes; some additional takeaways are at the end.
Since 2001, MIT Technology Review has done yearly Top 10 Emerging Technologies issues. Notes for the following years will follow. I also plan on separate posts for additional thoughts, updates since issue publication (on both the highlighted technologies and the mentioned researchers), and related topics (ie new breakthroughs, interesting startups, related research) for each publication.
1. Brain-Machine Interfaces
- Miguel Nicolelis is a leader in a competitive and highly significant field in which only about half a dozen teams are pursuing the same goals: gaining a better understanding of how the mind works and using that knowledge to build implant systems that would make brain control of computers and other machines possible
- Nicolelis, working with MIT’s Laboratory for Human and Machine Haptics, scored an important first on the HBMI (hybrid brain-machine interface, a term Nicolelis coined) front: sending signals from individual neurons in a monkey to a robot, which used the data to mimic the monkey’s arm movements in real time
- Monkey has sockets installed into top of skull that allow measurement of electrical signals from 90 neurons (4 separate areas of her cerebral cortex)
- In the long-term, HBMIs will allow human brains to control artificial devices designed to restore lost sensory and motor functions ie do for the brain what the pacemaker did for the heart
- Implants will help shed light on some of the brain’s mysteries: neuroscientists still know very little about how the electrical and chemical signals emitted by the brain’s millions of neurons let humans perceive color or smell, give rise to the precise movements of Brazilian soccer players
- “We don’t have a finished model of how the brain works. All we have are first impressions”
- Nicolelis’ latest experiments show that by tapping into multiple neurons in different parts of the brain, it is possible to glean enough info to get a general idea of what the brain is up to
- For the monkey, it’s enough info to detect her intention to make a specific movement a few tenths of a second before it actually happens (a minimal decoding sketch follows this list)
- Nicolelis’ team succeeded at reliably measuring tens of neurons simultaneously over several months (previously a key technological barrier), which enabled the remarkable demonstration with the robot arm
- Remaining challenges: developing electrode devices and surgical methods that will allow safe, long-term recording of neuronal activities
- Nicolelis is working on developing a telemetry chip that would collect and transmit data through the skull, without unwieldy sockets and cables
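
A common decoding approach from that era is a simple linear mapping from binned firing rates to limb position. Below is a minimal sketch of that idea on synthetic data; it is not Nicolelis's actual pipeline, and the numbers (90 neurons, noise levels) are just placeholders matching the notes above.

```python
# Minimal sketch: decode 2-D arm position from binned firing rates of ~90 neurons
# with a linear model (ordinary least squares). Synthetic data for illustration,
# not real recordings or Nicolelis's actual decoder.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_neurons = 2000, 90
true_weights = rng.normal(size=(n_neurons, 2))                            # hidden neuron -> position mapping
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)   # spike counts per time bin
positions = rates @ true_weights + rng.normal(scale=2.0, size=(n_samples, 2))  # noisy arm position

# Fit the decoder on the first half of the session, evaluate on the second half
X_train, X_test = rates[:1000], rates[1000:]
y_train, y_test = positions[:1000], positions[1000:]

W, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)  # least-squares decoding weights
pred = X_test @ W

rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"decoding RMSE: {rmse:.2f}")
```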

2. Flexible Transistors
- Implementations of pervasive computing will require integrated circuits that are both cheap and flexible (tough for today’s silicon technology)
- Scientists are working on transistors based on organic (carbon-based) molecules or polymers (organic electronics are inexpensive to manufacture and compatible with plastic substrates); however, organics are far slower than their silicon cousins
- Breakthrough: Cherie Kagan made a compromise: transistors made from materials that combine the charge-shuttling power and speed of inorganics with the affordability and flexibility of organics
- Hybrids may be far faster than amorphous silicon, and have a key advantage over silicon-based electronics: they can be dissolved and printed onto paper or plastic like particles of ink
- Kagan’s transistors could compete with organic electronics in variety of applications like radio-frequency product ID tags, flat-panel video displays (sharper images for a fraction of the cost) that lead to affordable wall-sized displays or high-quality displays that pop out of your pen (if all goes well, could be used in cheap, flexible displays within 5 years)
- Bright displays that fit in your pocket will require portable power: Kagan’s newest interest: cheap, flexible materials for solar cells to liberate pervasive computing from bulky batteries

3. Data Mining
- Data mining, also known as knowledge discovery in databases (KDD): rather than sorting through a few megabytes of structured data to answer specific queries, a system burrows through gigabytes of website visitor logs in search of patterns no one could anticipate in advance, for example to compile a recommendation list (a toy sketch of this kind of pattern mining follows this list)
- Usama Fayyad, a pioneer of data mining, was working at GM compiling a huge database on car repairs that would allow any GM service technician to ask the database questions based on several car characteristics and get a response
- Fayyad developed a pattern recognition algorithm to solve this, which was later used at NASA JPL to identify objects, and pursued by everyone from the military to doctors
- Fayyad identified a need: companies needed someone to host their databases for them, and provide data-mining services on top, which led to him creating digiMine
- Future: wide open as researchers move beyond original focus on highly structured, relational databases
- Hot area: text data mining: extracting unexpected relationships from huge collections of free-form text documents.
- UCB LINDI system has been used to help geneticists search biomedical literature and produce plausible hypotheses for the function of newly-discovered genes
- Hot area: video mining: combining speech recognition, image understanding, and natural language processing to open up the world’s vast video archives to efficient computer searching
- CMU’s Informedia II system, given CNN clips, produces a computer-searchable index by automatically dividing each clip into individual scenes accompanied by scripts and headlines
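
To make the "patterns no one can anticipate" idea concrete, here is a toy sketch of mining visitor logs for page co-occurrences to build a recommendation list. Real KDD systems (and Fayyad's algorithms) are far more sophisticated; the session data below is invented.

```python
# Toy sketch: count which pages tend to be visited in the same session, then
# recommend the strongest co-occurring pages. Illustration only.
from collections import Counter
from itertools import combinations

sessions = [
    {"home", "laptops", "laptop-bags"},
    {"home", "laptops", "mice"},
    {"laptops", "laptop-bags", "mice"},
    {"home", "phones"},
    {"laptops", "laptop-bags"},
]

pair_counts = Counter()
for pages in sessions:
    for a, b in combinations(sorted(pages), 2):
        pair_counts[(a, b)] += 1

def recommend(page, top_n=3):
    """Pages most often visited in the same session as `page`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == page:
            scores[b] += count
        elif b == page:
            scores[a] += count
    return scores.most_common(top_n)

print(recommend("laptops"))  # e.g. [('laptop-bags', 3), ('home', 2), ('mice', 2)]
```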

4. Digital Rights Management
- Ranjit Singh, president of ContentGuard, a Xerox PARC spinoff, is on a mission to commercialize content protection in a wired world
- Sits at ground zero of what may be the bloodiest battle to shape the Internet during the 21st century’s first decade: IP owners vs internet users (who want content to be freely distributed)
- The internet allows perfect and totally frictionless distribution
- Digital rights management (DRM) is the catalyst for a revolution in e-content, which will allow content owners to get much wider and deeper distribution than ever before, and to see who is passing their content to whom
- At its core, DRM amounts to an encryption scheme with a built-in e-business cash register: content is encoded, and to get the key, a user needs to do something (ie pay money, provide an email address, etc). DRM providers deliver the protection tools, whereas content owners set the conditions (a minimal sketch of the encrypt-then-license idea follows this list)
- ContentGuard uses a multiple-key approach; anyone receiving bootleg content would have to crack it all over again, so even if a hacker cracks a piece of content, he can’t distribute it
- DRM isn’t ubiquitous for 2 reasons
- Content owners are in the midst of a hard rethink about both pricing and distribution: how do you price 3 listens to a song, or a download of a low-res image that can’t be forwarded to others? They are currently trying out different models for valuing content
- The user experience has to hide the complexity of protection technologies. Users have to be able to buy and consume content without jumping through hoops
- Analysts don’t believe content can be protected in the Internet era; people want flexible access to content (re: Napster).
- Napster is unstoppable, and even if courts stop it, the Internet’s enablement of frictionless distribution of digital content among millions will live on
- The more content a business puts online, the faster it will want to put still more content up, because it will see the economic benefits and users will see the benefits of gaining access to more content, leading to a huge explosion (Network Effects)
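
A minimal sketch of the encrypt-then-license idea: the content is encrypted once, and the key is released only after the user satisfies a condition. This is illustrative only, not ContentGuard's multiple-key scheme; `issue_license` is a made-up stand-in for a license server, and the snippet assumes the Python `cryptography` package.

```python
# Sketch: protected content is useless until the license condition is met.
from typing import Optional
from cryptography.fernet import Fernet

# Publisher side: encrypt once, keep the key behind a (hypothetical) license server.
content_key = Fernet.generate_key()
protected_blob = Fernet(content_key).encrypt(b"Track 01: audio bytes go here...")

def issue_license(user_paid: bool) -> Optional[bytes]:
    """Hypothetical license check: hand out the key only if the condition is met."""
    return content_key if user_paid else None

# Consumer side: decrypt only with a valid license.
key = issue_license(user_paid=True)
if key is not None:
    print(Fernet(key).decrypt(protected_blob))
else:
    print("Purchase required to unlock this content.")
```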

5. Biometrics
- Biometrics: identifying individuals by specific biological traits, has already emerged
- Large companies use fingerprint sensors, facial recognition, iris-scanning
- Consumers have been reluctant to adopt
- Joseph Atick, President/CEO of Visionics (facial recognition), believes that the wireless Web will make consumers hungry for biometrics. PDAs and cell phones are becoming portals to users’ worlds, transaction devices, IDs, and maybe one day passports
- With so much personal/financial information in one place comes a great need for security, which will drive biometric systems; other tech developments (increased bandwidth, camera phones, etc) will create the infrastructure needed to put biometrics into consumer hands
- Visionics is working to let people authenticate any transaction they make over the wireless Web using their own faces
- Atick, while heading a lab at Rockefeller University, discovered that the brain deals with visual info much as computer algorithms compress files: because everyone has 2 eyes, a nose, lips, the brain extracts only those features that typically show deviations from the norm (ie bridge of nose, upper cheekbones), filling in the rest
- Visionics develops FaceIt, which verifies a person’s ID based on a set of 14 facial features unique to an individual and unaffected by presence of/changes in facial hair/expression (a toy verification sketch follows this list)
- Successfully used to fight crime in England and election fraud in Mexico
- Signed merger agreement with Digital Biometrics to build the first line of “biometric network appliances”: computers hooked to the Net with capacity to store/search large databases of facial/biometric info. Appliances with customer ID data can receive queries from companies wanting to authenticate e-transactions. Accessing the system works with PDAs/desktops, but most will come from handheld devices
- Also working with companies in Japan/Europe so consumers can capture their own faces and submit encrypted versions over the Net
- Future: bringing back an old element of human commerce, restoring the confidence that comes with doing business face to face
- It will be 2-3 years before PDA and cell phone wielders will use biometrics instead of passwords and PINs
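
A toy sketch of feature-based verification in the spirit of the notes above: reduce a face to a short feature vector, then accept or reject a claimed identity by comparing against an enrolled template with a similarity threshold. FaceIt's actual 14 features and matching algorithm are proprietary; the vectors here are random placeholders.

```python
# Sketch: verify a claimed identity by comparing a probe feature vector
# against an enrolled template. Feature extraction itself is omitted.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, template: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept the claimed identity if the probe is close enough to the template."""
    return cosine_similarity(probe, template) >= threshold

rng = np.random.default_rng(42)
enrolled = rng.normal(size=14)                             # template captured at enrollment
same_person = enrolled + rng.normal(scale=0.05, size=14)   # small within-person variation
impostor = rng.normal(size=14)                             # unrelated face

print(verify(same_person, enrolled))  # likely True
print(verify(impostor, enrolled))     # likely False
```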

6. Natural Language Processing
- New generation of interfaces arising that will allow extended conversation with computers; requires integration of speech recognition, natural language understanding, discourse analysis, world knowledge, reasoning ability, speech generation
- DARPA working on interfaces that will ultimately include pointing, gesturing, and other forms of visual communication
- IBM/Microsoft want a speech enabled “intelligent environment” where every object big enough to hold a chip actually has one; speech recognition necessary because they will each be too small to have a keyboard
- Karen Jensen, chief of NLP at MSFT Research (previously at IBM; contributed to MSFT’s Encarta encyclopedia and grammar checker), is now focused on MindNet, a system for automatically extracting a massively hyperlinked web of concepts from something like a standard dictionary
- Let’s say a dictionary defines a motorist as “a person who drives a car”
- MindNet uses automatic parsing tech to find the definition’s underlying logical structure, identifying “motorist” as a kind of person, “drives” as a verb taking motorist as a subject and car as an object (a toy version of this concept web follows this list)
- Wants a conceptual network tying together all of human understanding in words; it should show how 2 sentences said differently can mean the same thing
- MindNet has proved to be great for translation: build 2 separate conceptual networks for English and a second language, then align the webs so English logical forms match the other language’s equivalents, then annotate the matched logical forms with data from English-other language translations so translation proceeds in either direction
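
A toy version of the "motorist" example: represent the parsed definition as labeled links between concepts and walk them. MindNet used Microsoft's full parser over entire dictionaries; here the parse of one definition is simply hard-coded.

```python
# Sketch: "motorist: a person who drives a car" as a tiny web of concept links.
concept_graph = {
    ("motorist", "is_a", "person"),
    ("drive", "subject", "motorist"),
    ("drive", "object", "car"),
}

def related(concept):
    """All concepts directly linked to `concept`, with the relation label."""
    links = []
    for head, relation, tail in concept_graph:
        if head == concept:
            links.append((relation, tail))
        elif tail == concept:
            links.append((relation, head))
    return links

print(related("motorist"))  # [('is_a', 'person'), ('subject', 'drive')] in some order
```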

7. Microphotonics
- Photonic crystals are on the cutting edge of microphotonics: tech for directing light on a microscopic scale that will make a major impact on telecommunications
- Goal: replace electronic switches with faster, miniature optical devices
- None have the technical elegance and widespread utility of photonic crystals
- Photonic crystals provide means to create optical circuits and other small, inexpensive, low-power devices that can carry, route, process data at the speed of light
- Trend: make light do as many things as possible, won’t completely replace electronics though
- Photonic crystals are to photons what semiconductors are to electrons: offering an excellent medium for controlling the flow of light
- Crystals admit or reflect specific photons depending on wavelength and crystal design
- MIT Prof John Joannopoulos suggested that defects in crystals’ regular structure could provide an effective and efficient method to trap the light or route it through the crystal
- Mold the flow of light by confining light and figuring out different ways to make light bend, go straight, split, come back together in the smallest possible space
- Breakthrough: Explained how crystal filters could pick out specific streams of light from the flood of beams in wavelength division multiplexing (WDM), tech used to increase amount of data carried per fiber
- Helped set the stage for the world’s smallest laser and electromagnetic cavity, key components in building integrated optical circuits
- Even with an all-optical Internet, other problems loom:
- Advancements have come from improving fibers and tricks like WDM, but in 5-10 years, experts fear it won’t be possible to squeeze any more data into fiber optics (a back-of-the-envelope WDM channel count follows this list)
- “Perfect mirror” photonic crystals may be the solution: reflect specific wavelengths of light from every angle with extraordinary efficiency. Hollow fibers with this reflector could carry up to 1000x more data than current fiber optics. It doesn’t absorb/scatter light like glass, so it could also eliminate expensive signal amplifiers needed every 60-80km for today’s optical networks
- What are the theoretical limits of photonic crystals? How much smaller can they be made? How can they be integrated into optical chips for telecom/computers?
- Once you start being able to play with light, a whole new world opens up
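
Some back-of-the-envelope arithmetic on the WDM capacity point: roughly how many channels fit in the fiber C-band at a 100 GHz spacing? (The 100 GHz grid is a later ITU convention; it is used here only to make the numbers concrete.)

```python
# Rough channel count for WDM in the fiber C-band (~1530-1565 nm).
c = 299_792_458.0  # speed of light, m/s

f_high = c / 1530e-9   # ~195.9 THz
f_low = c / 1565e-9    # ~191.6 THz
spacing = 100e9        # 100 GHz channel spacing

channels = int((f_high - f_low) // spacing)
print(f"usable band: {(f_high - f_low) / 1e12:.2f} THz -> ~{channels} channels")
# At roughly 10 Gbit/s per channel (a typical line rate of that era), that is a few
# hundred Gbit/s per fiber; squeezing in more requires narrower, sharper filters,
# e.g. the photonic crystal filters described above.
```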

8. Untangling Code
- Gregor Kiczales, scientist at Xerox PARC, champions “aspect-oriented programming,” a technique meant to make tangled code far easier for software writers to modify and debug
- “Crosscutting” refers to capabilities, like logging, security, and synchronization, whose code cuts across many parts of a program; they are the same kind of repeated concerns that other professions have long had shortcuts for handling
- Logging: ability to trace and record every operation an application performs; only works if programmers remember to add it everywhere
- Security and Synchronization: ability to make sure that 2 users don’t try to access the same data at the same time; requires programmers to write the same functionality into many different areas of the application
- Keeping track of crosscutting concerns is error-prone; forget to upgrade just a few instances, and bugs start to pile up
- Kiczales proposes a new language construct called an “aspect,” which allows programmers to write, view, and edit a crosscutting concern as a separate entity (a rough Python analogy follows this list)
- Meaning less buggy upgrades, shorter product cycles, better/cheaper software
- Many firms already have a version, but Kiczales is the first to take it to the real world by incorporating it into a new extension of Java
- Northeastern: adaptive programming; IBM: subject-oriented programming; University of Twente: composition filters; elsewhere: multidimensional separation of concerns
- Major changes in programming methodology can take 30 years to gain acceptance; aspects could cut that cycle down by 15-20 years
- Crosscutting concerns aren’t actually hard to work with, once you have the proper programming support
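
Kiczales's real vehicle for this was AspectJ, an extension of Java. As a rough analogy in Python, a crosscutting logging concern can be written once and applied to many functions; note that true aspects use pointcuts to pick out join points declaratively, without marking each function the way a decorator does, so this is only an approximation of the idea.

```python
# Rough analogy: the logging concern lives in one place instead of being
# scattered through every function body.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """Crosscutting logging concern, defined once."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s args=%s kwargs=%s", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@logged
def transfer(src, dst, amount):
    return f"moved {amount} from {src} to {dst}"

@logged
def close_account(account_id):
    return f"closed {account_id}"

transfer("A", "B", 100)
close_account("A")
```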

9. Robot Design
- Big obstacle: expensive to design and make robots smart enough to adapt readily to different tasks and physical environments the way human beings do. How do builders build more complexity into robots without custom-tailoring each one?
- Robots stuck in commercial niche doing simple, repetitive jobs (ie assembly line, mass production of toys, etc)
- Promising approach: automate the design and manufacture of robotics by deploying computers to conceive, test, and even build the configurations of each robotic system
- Jordan Pollack of Brandeis, directed a computer to design a moving creature using a limited set of simple plastic parts: plastic rods, ball joints, small motors, and a “brain” (neural network)
- The computer, using an algorithm inspired by biological evolution, evolved hundreds of generations of potential designs, killing off the sluggish and refining the strong, then brought the strongest to life with a rapid-prototyping machine (a toy version of this evolve-and-select loop follows this list)
- The important point of coevolutionary design and automated manufacturing for robotics is to make small-quantity production economical (he expects the first cheap industrial robots to be 5-10 years away)
- Before robots reach out into everyday world of businesses and households, they need their own version of Moore’s Law: becoming dramatically more affordable and powerful over time
- Designing even relatively simple robots is a painstaking task: Honda has spent 14+ years building a humanoid robot able to walk, open doors, navigate stairs
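
A toy version of the evolve-and-select loop described above: random "designs" are scored, the sluggish discarded, and the strong mutated into the next generation. The fitness function below is an arbitrary stand-in for "distance the simulated creature travels," not Pollack's actual simulator or encoding.

```python
# Sketch of an evolutionary design loop: score, select, mutate, repeat.
import random

random.seed(0)

def random_design():
    # A "design" is just a vector of part parameters (e.g. rod lengths, motor gains).
    return [random.uniform(-1, 1) for _ in range(8)]

def fitness(design):
    # Arbitrary stand-in for "distance the simulated creature travels".
    return sum(x * (1 - abs(x)) for x in design)

def mutate(design, rate=0.2):
    return [x + random.gauss(0, rate) for x in design]

population = [random_design() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                           # kill off the sluggish
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
print(f"best fitness after 100 generations: {fitness(best):.3f}")
```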

10. Microfluidics
- Microfluidics: a promising new branch of biotechnology with the idea that once you master fluids at the microscale, you can automate key experiments for genomics and pharmaceutical development, perform instant diagnosis tests, even build implantable drug-delivery devices, all on mass-produced chips (a quick scale estimate of why microscale flows are so well behaved follows this list)
- Microfluidics will do for biotech what the transistor did for electronics
- Problems: developing general tech that can be used for a broad range of applications, with several functions integrated into a single chip. Manufacturing, particularly silicon micromachining, is so expensive that experts question whether products using these techniques can ever be economical to manufacture
- Stephen Quake’s group at Caltech unveiled a set of microfabricated valves and pumps, a critical first step in developing tech general enough to work for any microfluidics application
- To make microfluidics cheaper, Quake is casting them out of soft silicone rubber in reusable molds (“soft lithography”)
- Potential for mass-produced, disposable microfluidic chips that make possible everything from drug discovery on a massive scale to at-home tests for common infections
- First was a microscale DNA analyzer that operates faster and on different principles than conventional full-sized version, then a miniature cell sorter, and most recently, those valves and pumps
- Quake finished his bachelor’s and master’s in physics at Stanford in 4 years, got bored, started focusing on “the physics of biology,” was hired at Caltech as its first interdisciplinary professor, and gained tenure at just 31 years old
- Quake founded a startup called Mycometrix, which has licensed all of Quake’s microfluidics patents from Caltech and is planning to deliver its first microfluidic devices to researchers soon (HP and Motorola are trying as well, but only Mycometrix has actually brought a product to market)
- Quake more interested in basic biology questions: How do the proteins that control gene expression work? How can you do studies that cut across the entire genome?
- Now that Quake has some neat tools, he’s looking to do some science with them
- Quake is the prototypical innovator: he has ability to work in all areas, from basic research to hot commercial markets.
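
A quick scale estimate of why fluids are so well behaved at the microscale: in a roughly 100 micron channel the Reynolds number is far below 1, so flow is laminar and predictable, which is part of why chip-scale valves and pumps can work reliably. The numbers below are typical textbook values for water in a microfluidic chip, not figures from the article.

```python
# Reynolds number for water flowing slowly through a ~100 micron channel.
rho = 1000.0     # water density, kg/m^3
mu = 1.0e-3      # water viscosity, Pa*s
velocity = 1e-3  # flow speed, m/s (1 mm/s)
width = 100e-6   # channel width, m (100 microns)

reynolds = rho * velocity * width / mu
print(f"Re ~ {reynolds:.3f}  (well below 1: laminar, no turbulence to fight)")
```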

Some Additional Thoughts:
- It’s been almost 19 years since this publication came out: it’s interesting to see that many of the scientists’ timing predictions have been off. It seems like they ran into science vs engineering problems (separate post on this later).
- Researchers back then said breakthroughs were 5-10 years away. One has to wonder which of the breakthroughs scientists now say are right around the corner actually are, and which are still far-off dreams (re: autonomous driving).
- Many of the technologies highlighted then are still being highlighted today; not sure if they went through a winter and are currently going through a resurgence, or if they have been hot this entire time. Are technology hype cycles 10 years? 20 years?
- Good mix of university researchers and startups. Today hot startups/companies working on these technologies include:
- Brain-Machine Interfaces: Neuralink, CTRL-Labs
- Biometrics: Amazon, Apple, Face++, SenseTime, Alibaba
- Natural Language Processing: IBM, Bytedance, Facebook
- Do researchers make good entrepreneurs? At the very least, most of the startups mentioned here seem to be out of business or now part of other companies. But is this due to natural life cycle of tech companies or researcher competency?
- What kind of breakthroughs need to happen to actually make these technologies a reality? We know neural networks in their current form (trained with backpropagation) have been around since the 1980s. Advanced sensors and chips are what led to the massive collection of data and explosion in computing power that has driven the AI boom of the last decade (Thanks Nvidia, Thanks Google).
- What are the key drivers of a technology?
- When will the catalyst(s) that propel them forward occur?
- Who is leading the charge for each field?
- Why do these technologies even matter? Do they even matter?
- Where will these breakthroughs happen? Where geographically? Which disciplines? Cross border? Interdisciplinary?
- How will these innovations change the way we live? Will they help us thrive, or just survive?