New Light-based Microprocessor Up to 50 Times Faster Than Conventional Chips


Researchers at the University of Colorado Boulder, the University of California, Berkeley, and the Massachusetts Institute of Technology have developed a groundbreaking microprocessor chip that uses light instead of electricity to transfer data at very high speeds while consuming far less energy.

Conventional microprocessor chips, the ones found in everything from digital watches, laptops, and microwave ovens to supercomputers, use electrical circuits to communicate with each other and transfer data. In recent years, however, the sheer amount of electricity needed to power the ever-growing pace and volume of these data transfers has proven to be a limiting factor.

To overcome this impediment, the researchers turned to photonic, or light-based, technology. Sending data with light rather than electricity reduces a microchip's power burden because light can be sent over longer distances using the same amount of energy.

The light-enabled microprocessor, a 6 x 3 mm chip, installed on a circuit board. Picture by Glenn Asakawa – Source: CU-Boulder

This cutting-edge processor is far superior to standard electrical chips: it can handle three hundred gigabits per second per square millimeter (300 Gbps/mm²), which is 10 to 50 times more data than a typical processor.
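As a quick sanity check on those figures, here is a back-of-the-envelope calculation; the arithmetic is my own illustration, not something reported in the paper:

```python
# Illustrative arithmetic based only on the figures quoted above (not from the paper).
optical_density_gbps_per_mm2 = 300.0   # quoted bandwidth density of the light-enabled chip

# If that is 10 to 50 times more than a typical processor, the implied range for
# conventional electrical chips is:
for factor in (10, 50):
    print(f"{optical_density_gbps_per_mm2 / factor:.0f} Gbps/mm^2")   # 30 and 6
```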

The new chip measures 6 by 3 millimeters and retains state-of-the-art conventional electronic circuitry. It features two processor cores with 70 million transistors, along with 850 optical input/output components that are used to send and receive light.

There are numerous advantages to a chip that uses light as an alternative to electricity.
  • First, such chips can send data in multiple parallel streams encoded on different colours of light over the same medium (see the sketch after this list).
  • Second, these chips can use infrared light, whose physical wavelength is shorter than one micron. To put that into perspective, that is about one hundredth (1/100th) of the width of a human hair, which means many more components can be fitted onto a single chip.
  • Another advantage of light-based chips is that they are much more energy-efficient, so they are more environmentally friendly and can deliver huge savings on data centers' power bills.
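To make the first advantage above concrete, here is a minimal sketch of the underlying idea, wavelength-division multiplexing; the channel count and per-channel rate below are assumed values for illustration, not figures from the research:

```python
# Hypothetical illustration of sending parallel data streams on different colours
# of light over one waveguide. The numbers below are assumptions, not from the paper.
channels = 8                    # assumed number of wavelengths ("colours") used
rate_per_channel_gbps = 10.0    # assumed data rate carried by each wavelength

# Each colour is an independent stream, so the aggregate throughput of the single
# shared medium is simply the sum over channels:
aggregate_gbps = channels * rate_per_channel_gbps
print(f"{aggregate_gbps:.0f} Gbps over a single waveguide")   # 80 Gbps
```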

“This is a milestone. It is the first processor that can use light to communicate with the external world,” stated Vladimir Stojanović, an associate professor of electrical engineering and computer sciences at UC Berkeley who led the collaborative team behind this research. “No other processor has photonic I/O in the chip.”

By combining the optical and electrical circuitry on a single chip, the researchers anticipate that the new technology can easily be integrated into existing manufacturing processes and scaled up for industrial production with minimal disruption.

“We found a way to reuse the same materials and processing steps that make up the electronic circuits to build high-performance optical devices in the same chip,” stated Mark Wade, a Ph.D. candidate at CU-Boulder and a co-lead author of the study. “This enables us to design complex electronic-photonic systems that can solve the communication bottlenecks in computing.”

The new processor is far superior to most electronic processors currently available, but its potential has not yet been fully realized.

OptiBit (now Ayar Labs) won the $200,000 MIT Clean Energy Prize in 2015 – Image source: Eversource

The research has resulted in two startups, one of which is Ayar Labs (formerly known as OptiBit), which focuses on energy-efficient, high-volume data transfers. The firm was founded by researchers from CU-Boulder, UC Berkeley, and MIT. Under its former name, it won the MIT Clean Energy Prize, sponsored by the US Department of Energy, in May 2015.

Details of the new technology, which may pave the way for faster, more powerful computing systems and network infrastructure, were published on December 24 in the journal Nature.

 


FAA Approved: The Flying Car TF-X’s model


 

Why drive a car, autonomous or otherwise, if you can fly one? That seems to be the question of the day at the Federal Aviation Administration, which approved test flights in U.S. airspace at the beginning of this month. The celebrating party is none other than Terrafugia, a company that focuses on airborne automobiles, better known as ‘flying cars’. According to the company, its TF-X flying car will soon be whizzing around the skies of the northeastern United States for further development and research. “It’s a significant milestone in the development of the program and we’re really excited to be moving forward,” Terrafugia spokesperson Dagny Dukach told us.

Sadly 😥, you will not be able to leap into one of these vehicles and take to the skies anytime soon; the prototypes that have been cleared for flight are merely mini versions of the actual cars. Coming in at just two feet long and with a weight limit of 25 kg, it will nonetheless be some time before we are in Jetsons territory. Still, this development marks a huge step forward for the technology, as Dukach explained: “The FAA exemption will allow Terrafugia to test the hovering capabilities of a 1/10th-scale TF-X vehicle and gather flight characteristics data that may drive future design decisions.”

The TF-X in action – Artist concept – Source: Terrafugia

Terrafugia has been flirting with the idea of flying cars for the last ten years or so, and its concept for the TF-X will feature semi-autonomous flight, which means you would need much less training to fly this car than to operate, say, a real plane. Nevertheless, there are still quite a few issues to be worked out, including how the vehicle would be powered. At present, the company plans for the TF-X to operate as a plug-in hybrid-electric, but exactly how this will come to fruition has yet to be determined.

Related: Upgraded – The flying car called the Terrafugia TF-X

If (and hopefully this time it's not a big if) we do finally see the TF-X flying in real life, it will fly at a maximum speed of 320 km/h and have a maximum flying range of 800 kilometers. And with no runway needed for takeoff or landing, you could literally soar into the skies from your driveway.

The flying car in action – Artist concept – Source: Terrafugia

We are as excited as you might be for self-driving vehicles to hit the consumer market, but we are even more excited about the flying car becoming a reality.

 

 

Video:

http://ift.tt/1NQaRmw


AI: Robots that can learn by viewing how-to videos


 

By scanning several videos on the same how-to topic, a computer inside a robot finds the instructions they have in common and combines them into a single step-by-step sequence.

A robot learns from how-to videos – Source: RoboWatch

If you hire new employees, you might sit them down to watch an instructional video on how to do the job. What happens if you buy a new robot?

Cornell researchers are teaching robots to watch instructional videos and derive a sequence of step-by-step directions for carrying out a task. You won't even have to turn on the DVD player; the robot can look up what it needs on YouTube. The work is geared toward a future when we may have "personal robots" to carry out everyday tasks – feeding the cat, washing dishes, cooking, doing the laundry – as well as helping the elderly and people with disabilities.

The researchers have named their project “RoboWatch.” Part of what makes it possible is that there is a common underlying structure to most how-to videos. And there is plenty of source material out there: YouTube has more than 180,000 clips on “How to make an omelet” and 809,000 on “How to tie a tie.” By scanning a number of videos on the same activity, a computer can discover what they all have in common and reduce that to simple step-by-step directions in natural language.

Why do people publish all these videos? “Maybe to help people, or perhaps simply to show off,” said graduate student Ozan Sener, lead author of a paper on the video-parsing method presented on December 16 at the International Conference on Computer Vision in Santiago, Chile. Sener collaborated with colleagues at Stanford University, where he is currently a visiting researcher.

A key feature of their system, Sener pointed out, is that it is “unsupervised.” In most previous work, robot learning has been accomplished by having a human explain what the robot is observing – for instance, teaching a robot to recognize objects by showing it pictures of the objects while a human labels them by name. Here, a robot with a job to do can look up the directions and figure them out for itself.

Faced with an unfamiliar task, the robot's computer mind begins by sending a query to YouTube to find a collection of how-to videos on the subject. The algorithm includes routines to omit “outliers” – videos that match the keywords but are not instructional; a query about cooking, for instance, might bring up clips from the animated feature Ratatouille, advertisements for cooking utensils, or some old Three Stooges routines.

The computer scans the clips frame by frame, looking for objects that appear often, and reads the accompanying narration – using the subtitles – looking for frequently repeated phrases. Using these markers, it matches similar segments across the various videos and orders them into a single sequence, and from the subtitles of that sequence it produces written directions. In other research, robots have learned to carry out tasks by listening to verbal directions from a human. In the future, data from other sources such as Wikipedia may be added.
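The published method is more sophisticated, but the following toy sketch (entirely my own illustration, with made-up subtitle snippets, not the RoboWatch code) captures the unsupervised idea described above: phrases that recur across several videos on the same task are kept and ordered by where they tend to appear.

```python
# Toy illustration of the idea described above, not the RoboWatch code itself:
# phrases that recur across several how-to videos are kept and ordered by the
# position at which they typically appear, giving rough step-by-step directions.
from collections import defaultdict

def candidate_phrases(subtitle_text, n=3):
    """Yield (relative_position, phrase) for every n-word window in one video's subtitles."""
    words = subtitle_text.lower().split()
    span = max(len(words) - n, 1)
    for i in range(len(words) - n + 1):
        yield i / span, " ".join(words[i:i + n])

def common_steps(subtitles, n=3, min_videos=2):
    """Keep phrases seen in at least `min_videos` videos, ordered by average position."""
    positions = defaultdict(list)               # phrase -> positions across videos
    for text in subtitles:
        first_seen = {}                         # first position of each phrase in this video
        for pos, phrase in candidate_phrases(text, n):
            first_seen.setdefault(phrase, pos)
        for phrase, pos in first_seen.items():
            positions[phrase].append(pos)
    steps = [(sum(p) / len(p), phrase)
             for phrase, p in positions.items() if len(p) >= min_videos]
    return [phrase for _, phrase in sorted(steps)]

# Made-up subtitle snippets for two "how to make an omelet" videos:
videos = [
    "crack the eggs into a bowl whisk the eggs then pour into the pan",
    "first crack the eggs into a bowl now whisk the eggs and pour into the pan",
]
print(common_steps(videos))
```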

The knowledge learned from the YouTube videos is made accessible through RoboBrain, an online knowledge base that robots anywhere can consult to help them do their jobs.

The research is supported in part by the Office of Naval Research and a Google Research Award.



The Modular ‘Fairphone 2’ is available now – Is it any good?

Two years after Fairphone launched its debut gadget, the firm is back with an altogether more interesting follow-up. Fairphone's first handset was released in 2013 and was intended to be a more equitable device, one that used fairly traded materials and put a premium on better working conditions for the people who assembled it. Despite coming from an unknown Dutch startup, 60,000 people went on to purchase the handset.

Because of the firm's startup status, Fairphone's first handset was based on a reference design from its initial manufacturing partner. The design of the Fairphone 2, however, was more under the company's control, enabling it to bring in a new, modular design. By allowing customers to swap out elements of their smartphone — the screen or headphone jack, for instance — if they break or want an upgrade, Fairphone hopes that customers will be able to keep their smartphones for much longer than the normal two-year upgrade cycle without feeling like they are suffering as a consequence. Result: fewer old handsets ending up in landfill.

"We came up with the original concept of [a phone] that you might disassemble, maintain, repair and ultimately upgrade… We had a goal in mind: the basic idea was we wanted to have people use it for about five years. It sounds long in this day and age, but when you go back about five years, that is the iPhone 4S, and there are still lots of people using an iPhone 4S," Olivier Hebert, the firm's CTO, said.

Taking the modular concept from the drawing board to store shelves has been no easy task. "A normal phone is one monolithic block with everything built in, and you've got a couple of exterior surfaces, and that is it. We now have seven different components [the modules that make up the phone], and every single one of them is a product in itself. All the complexity you need to create a cosmetically correct product, you have that multiplied by seven."

What's inside the Fairphone

So what is the smartphone like to use? It is a harder phone to evaluate than many of its contemporaries because it manages to be both an average mid-range Android and somewhat special at the same time.

The Fairphone 2 opened up – Source: ZDNet

At a basic level, it has a big, bright display; at 32GB it has a fairly large amount of storage; and it has a solid processor, the Snapdragon 801. It is light to carry, but with a 5-inch display and a half-inch of bezel it is chunky in the hand. That sense of chunkiness isn't dispelled by the phone's depth, which is partially due to the Fairphone 2's built-in protective case that snaps on and off the handset – options include a rather smart semi-translucent dark blue number. The cases have a rubber bumper that runs around the phone's edges and onto the front, to protect the handset from the drops and falls that can end in a smashed screen – another way the company has come up with something that can keep handsets from an […]

Source: The Modular ‘Fairphone 2’ is available now – Is it any good?

MIT’s latest microscope can view nanoscale processes in real time


 

State-of-the-art atomic force microscopes (AFMs) are designed to capture images of structures as small as a fraction of a nanometer — one million times smaller than the width of a human hair. In recent years, AFMs have produced desktop-worthy close-ups of atom-sized structures, from single strands of DNA to individual hydrogen bonds between molecules.

However, scanning these images is a meticulous, time-consuming process. AFMs have therefore been used mostly to image static samples, as they are too slow to capture active, changing environments.

MIT's latest microscope – Source: MIT

Now engineers at the Massachusetts Institute of Technology (MIT) have designed an atomic force microscope that scans images 2,000 times faster than existing commercial models. With this new high-speed instrument, the team produced images of chemical processes taking place at the nanoscale, at a rate close to real-time video.

In one demonstration of the instrument's capabilities, the researchers scanned a 70-by-70-micron sample of calcite as it was first immersed in deionized water and then exposed to sulfuric acid. Over an interval of several seconds, the group watched the acid eat away at the calcite, expanding existing nanometer-sized pits in the material that quickly merged and led to a layer-by-layer removal of calcite along the material's crystal pattern.

Kamal Youcef-Toumi, a professor of mechanical engineering at MIT, says the instrument's sensitivity and speed will allow scientists to watch atomic-sized processes play out as high-resolution "movies."

“People can see, for instance, condensation, nucleation, dissolution, or deposition of material, and how these occur in real time — things that people have never seen before,” Youcef-Toumi says. “It is incredible to see these details emerging. And it will open great opportunities to explore all of this world that is at the nanoscale.”

The group's design and images, which are based on the PhD work of Iman Soltani Bozchalooi, now a postdoc in MIT's Department of Mechanical Engineering, are published in the journal Ultramicroscopy. Co-authors include former graduate student Andrew Careaga Houck and visiting scholar Jwaher AlGhamdi.

The big picture

Atomic force microscopes typically scan samples using an ultrafine probe, or needle, that skims along the surface of a sample, tracing its topography much as a blind person reads Braille. Samples sit on a movable platform, or scanner, that moves the sample laterally and vertically beneath the probe. Because AFMs scan extremely small structures, the instruments have to work slowly, line by line, to avoid any sudden movements that might alter the sample or blur the picture. Such conventional microscopes typically scan about 1 to 2 lines per second.

“If the sample is static, it is okay to take 8 to 10 minutes to get an image,” Youcef-Toumi says. “But if it is something that is changing, then imagine starting to scan from the top very slowly. By the time you get to the bottom, the sample has changed, and so the information in the picture is not right, because it has been stretched over time.”
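Those two figures line up; assuming an image is built from roughly 500 scan lines (my assumption, not a number given in the article), the frame time works out like this:

```python
# Back-of-the-envelope check, assuming ~500 scan lines per image (an assumption,
# not a figure from the article): at 1-2 lines per second, one frame takes minutes.
lines_per_frame = 500
for lines_per_second in (1.0, 2.0):
    minutes = lines_per_frame / lines_per_second / 60
    print(f"{minutes:.1f} minutes per frame")   # 8.3 and 4.2 minutes
```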

To speed up the scanning process, scientists have tried building smaller, more nimble platforms that scan samples more quickly, albeit over a smaller area. Bozchalooi says that such scanners, while speedy, do not let scientists zoom out to see a wider view or examine bigger features.

“It is like landing somewhere in the USA without any clue where you are landing, and being told that wherever you land, you are only allowed to look a few blocks around and up to a limited height,” Bozchalooi says. “There is no way you will get a bigger image.”

Scanning simultaneously

Bozchalooi came up with a design that allows high-speed scanning over both large and small ranges. The primary innovation centers on a multi-actuated scanner and its control: for each direction, the sample platform incorporates a smaller, speedier scanner as well as a bigger, slower one, and the two work together as one system to scan a wide 3-D region at high speed.

Other attempts at multi-actuated scanners have been stymied, mostly because of interactions between the scanners: the movement of one scanner can affect the precision and movement of the other. Researchers have also found it difficult to control each scanner individually and get them to work with every other component of a microscope. To scan each new sample, Bozchalooi says, a scientist would need to make a number of tunings and adjustments to multiple components of the instrument.

To simplify the use of the multi-actuated instrument, Bozchalooi developed control algorithms that take into account the effect of one scanner on the other.

“Our controller can move the little scanner in a manner that does not excite the big scanner, because we know what sort of movement triggers the big scanner, and vice versa,” Bozchalooi says. “In the end, they are working in synchrony, so from the perspective of the scientist, this looks like a single high-speed, large-range scanner that doesn't add any complexity to the operation of the instrument.”
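The article does not spell out the control scheme, but a common way to coordinate a slow, long-range actuator with a fast, short-range one is to split the commanded trajectory by frequency. The sketch below is purely a conceptual illustration of that idea under my own assumptions, not the authors' controller:

```python
# Conceptual sketch only (my own illustration, not the authors' controller):
# route the slow part of a desired scan trajectory to the large scanner and the
# fast remainder to the small scanner, so each actuator stays within its range.
import numpy as np

def split_trajectory(reference, alpha=0.02):
    """First-order low-pass filter: the slow component drives the large scanner,
    the residual (reference - slow) drives the small, fast scanner."""
    slow = np.zeros_like(reference)
    for i in range(1, len(reference)):
        slow[i] = slow[i - 1] + alpha * (reference[i] - slow[i - 1])
    fast = reference - slow
    return slow, fast

# Example: a long raster ramp with a small, rapid ripple riding on top (microns).
t = np.linspace(0.0, 1.0, 1000)
reference = 50.0 * t + 0.5 * np.sin(2 * np.pi * 200 * t)
slow, fast = split_trajectory(reference)
print(np.ptp(slow), np.ptp(fast))   # big scanner covers ~50 um; small scanner only a few um
```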

After optimizing other components of the microscope, such as the instrumentation, optics, and data-acquisition systems, the group found that the instrument could scan a sample of calcite forward and backward with no damage to the probe or sample. The microscope scans at more than 2,000 hertz, or 4,000 lines per second – roughly two thousand times faster than existing commercial AFMs – which translates to about 8 to 10 frames per second. Bozchalooi says the instrument has no limit on imaging range and, for a given maximum probe speed, can scan across hundreds of microns as well as image features that are several microns high.
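Using the same assumed resolution of roughly 500 lines per frame (my assumption, as before), the quoted line rate is consistent with the reported frame rate:

```python
# Same assumed resolution as before (~500 lines per frame, my own assumption):
lines_per_second = 4000
lines_per_frame = 500
print(lines_per_second / lines_per_frame)   # 8.0 frames per second, matching the quoted 8-10 fps
```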

“We want to go to real video, which is at least 30 frames per second,” Youcef-Toumi says. “Hopefully we can improve the instrument and controls so that we can do video-rate imaging while maintaining its large range and keeping it user-friendly. That would be something nice to see.”

See the video of the microscope released by the researchers below. It has no sound.

http://ift.tt/1m5EPeS

This research was supported, in part, by the Center for Clean Water and Clean Energy at MIT and KFUPM, and by National Instruments.

 


 
