
New Technology Springing Up from Carnegie Mellon University

Technological breakthroughs are what separate Carnegie Mellon University from its competition. These inventions make browsing the internet easier, bring 3-D printing to more personal uses, hand cooking and delivery tasks to robots, send snake-like rescue robots where first responders cannot reach, and transform 'dumb walls' into 'smart walls.' Click each image below to be directed to Carnegie Mellon University for the full articles.

Bento Browser Makes It Easier To Search On Mobile Devices  

A new web browser developed at Carnegie Mellon University now brings order to complex searches in a way not possible with conventional tabbed browsing. The Bento browser, inspired by compartmentalized bento lunch boxes popular in Japan, stores each search session as a project workspace that keeps track of the most interesting or relevant parts of visited web pages.  

Projects can be stored for later use, handed off to others or moved to different devices. Someone planning a trip to Alaska with a conventional browser, for instance, might create multiple tabs for each location or point of interest, as well as additional tabs for hotels, restaurants and activities. With Bento, users can identify pages they found useful, trash unhelpful pages and keep track of what they have read on each page. Bento also bundles the search result pages into task cards, such as accommodations, day trips and transportation.

Kittur’s research team will present a report on the mobile web browser at CHI 2018, the Conference on Human Factors in Computing Systems, April 21-26 in Montreal, Canada.

Mobile devices now initiate more web searches than desktop computers do, yet the limitations of conventional browsers become more acute on mobile devices. Not only is screen size limited, but mobile users are more often interrupted and distracted and have more difficulty saving and organizing information, said Nathan Hahn, a Ph.D. student in the Human-Computer Interaction Institute (HCII).

Bento Browser is currently a search app for iPhones, but its capabilities for organizing searches and helping people resume them could also benefit people using desktop computers.
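The article does not describe Bento's internal data structures, so purely as an illustration, here is a minimal Python sketch of how a search session might be modeled as a project workspace of task cards with per-page read state. All class and field names here are hypothetical, not Bento's actual code.

```python
from dataclasses import dataclass, field
from enum import Enum


class PageState(Enum):
    UNREAD = "unread"
    READ = "read"
    USEFUL = "useful"    # flagged as interesting or relevant
    TRASHED = "trashed"  # marked unhelpful, hidden from view


@dataclass
class Page:
    url: str
    title: str
    state: PageState = PageState.UNREAD


@dataclass
class TaskCard:
    """One subtask of a search, e.g. 'accommodations' or 'day trips'."""
    name: str
    pages: list[Page] = field(default_factory=list)

    def visible_pages(self) -> list[Page]:
        return [p for p in self.pages if p.state != PageState.TRASHED]


@dataclass
class ProjectWorkspace:
    """A whole search session: storable, resumable and shareable."""
    topic: str
    cards: list[TaskCard] = field(default_factory=list)


# Example: the Alaska trip from the article as one workspace
trip = ProjectWorkspace("Trip to Alaska", [
    TaskCard("accommodations"),
    TaskCard("day trips"),
    TaskCard("transportation"),
])
trip.cards[0].pages.append(Page("https://example.com/hotel", "Anchorage hotel"))
trip.cards[0].pages[0].state = PageState.USEFUL
```

Modeling the session this way is what lets a workspace be saved, handed to another person or resumed on a different device: the whole search state lives in one serializable object rather than in a pile of open tabs.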

Cheap 3-D Printer Can Produce Self-Folding Materials

Researchers at Carnegie Mellon University have used an inexpensive 3-D printer to produce flat plastic items that, when heated, fold themselves into predetermined shapes, such as a rose, a boat or even a bunny.

Lining Yao, assistant professor in the Human-Computer Interaction Institute and director of the Morphing Matter Lab, said these self-folding plastic objects represent a first step toward products such as flat-pack furniture that assumes its final shape with the help of a heat gun. Emergency shelters also might be shipped flat and fold into shape under the warmth of the sun.

Self-folding materials are quicker and cheaper to produce than solid 3-D objects, making it possible to replace noncritical parts or produce prototypes using structures that approximate the solid objects. Yao will present her group’s research on this method, which she calls Thermorph, at CHI 2018, the Conference on Human Factors in Computing Systems, April 21-26 in Montreal, Canada.

Yao and her research team created the self-folding structures using the least expensive type of 3-D printer, an FDM printer, and by taking advantage of warpage, a common problem with these printers. FDM printers work by laying down a continuous filament of melted thermoplastic. The material contains residual stress, and as it cools and the stress is relieved, the thermoplastic tends to contract, which can warp edges and surfaces. To create self-folding objects, the team precisely controls this process by varying the speed at which the thermoplastic is deposited and by combining warp-prone materials with rubber-like materials that resist contraction.

The objects emerge from the 3-D printer as flat, hard plastic. Placing the plastic in water hot enough to turn it soft and rubbery, but not hot enough to melt it, triggers the folding process. Though they used a 3-D printer with standard hardware, the researchers replaced the machine’s open-source software with their own code that automatically calculates the print speed and patterns necessary to achieve particular folding angles.

Though these early examples are at a desktop scale, making larger self-folding objects appears feasible. “We believe the general algorithm and existing material systems should enable us to eventually make large, strong self-folding objects, such as chairs, boats or even satellites,” said Jianzhe Gu, HCII research intern.
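The article says Thermorph's custom software computes the print speeds and patterns needed to hit particular folding angles, but it does not publish the underlying model. As a rough illustration only, the sketch below interpolates a deposition speed from made-up calibration data pairing speeds with resulting fold angles; the numbers and the linear model are assumptions, and the real planner is certainly more sophisticated.

```python
import bisect

# Hypothetical calibration data: deposition speed (mm/s) vs. resulting
# fold angle (degrees) once the flat print is softened in hot water.
# Faster deposition leaves more residual stress, hence a sharper fold.
CALIBRATION = [  # (speed_mm_s, fold_angle_deg), sorted by angle
    (10.0, 5.0),
    (30.0, 30.0),
    (60.0, 75.0),
    (90.0, 120.0),
]


def speed_for_angle(target_deg: float) -> float:
    """Linearly interpolate a print speed for a target fold angle.

    An illustrative stand-in for Thermorph's real planner, whose
    actual speed/angle model is not described in the article.
    """
    angles = [a for _, a in CALIBRATION]
    if target_deg <= angles[0]:
        return CALIBRATION[0][0]
    if target_deg >= angles[-1]:
        return CALIBRATION[-1][0]
    i = bisect.bisect_left(angles, target_deg)
    (s0, a0), (s1, a1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (target_deg - a0) / (a1 - a0)
    return s0 + t * (s1 - s0)


print(f"{speed_for_angle(90.0):.1f} mm/s")  # speed for a 90-degree fold
```

The point of the sketch is the inversion the researchers describe: rather than fighting warpage, the software treats it as a controllable output and solves backward from the desired fold to the print parameters that produce it.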

Sony and Carnegie Mellon University Sign Research Agreement on Artificial Intelligence and Robotics  

Initial Efforts to Focus on Cooking and Delivery  

The School of Computer Science has entered into an agreement with Sony Corporation, through its U.S. subsidiary Sony Corporation of America, to collaborate on artificial intelligence (AI) and robotics research, the company announced today.

Initial research and development efforts will focus on optimizing food preparation, cooking and delivery. This area was selected because the technology a robot needs to handle the complex, varied tasks of preparing and delivering food could be applied to a broader set of skills and industries, including applications where machines must handle fragile and irregularly shaped materials or carry out complex household and small-business tasks. Robots developed for food preparation and delivery would also have to operate in small spaces, an ability that could be valuable for many other industries.

For this project, researchers will focus on defining the domain of food ordering, preparation and delivery. Initially they will build upon existing manipulation robots and mobile robots, and they plan to develop new domain-specific robots for predefined food-preparation tasks and for mobility in limited, confined spaces.

Depending on the needs of the consumer, food offerings and preparation methods could be adjusted for personal dietary restrictions and the availability of certain ingredients. Food could be delivered to the home or office, and dining tables could be set elegantly before the food is served.

“This project has the potential to make the vast possibilities of AI and robotics more familiar and accessible to the general public,” said Hiroaki Kitano, president and CEO of Sony Computer Science Laboratories. “Additionally, it could also assist those for whom daily tasks, such as food preparation, are challenging. I am very excited to be working with the talented scientists at CMU to make this vision a reality.”

Snakebot Named Ground Rescue Robot of the Year

The Robotics Institute’s multi-jointed Snakebot robot, which searched for earthquake survivors in Mexico City last fall, has been named Ground Rescue Robot of the Year by the Center for Robot-Assisted Search and Rescue (CRASAR).

Howie Choset, professor of robotics, and systems scientist Matt Travers have been studying the potential use of snake-like robots for disaster search and rescue for years in CMU’s Biorobotics Lab. The robot can propel itself into the smallest of spaces, allowing rescuers to search for signs of life where dogs and people cannot reach, CRASAR noted in its award announcement.

Travers led a small team to Mexico City last fall to search for survivors with Snakebot, its first use during the response phase of an actual disaster. The robot discovered no survivors in the collapsed apartment building where it was deployed.

“Rescue decisions and critical infrastructure decisions during that response phase are made very rapidly based on the best available information at the time, and these robots, well-deployed with the right teams of operators and experts, are getting key information to decision makers so they can save lives and efficiently manage risk,” said Robin Murphy, director of CRASAR and a professor of computer science and engineering at Texas A&M University.

Paint Job Transforms Walls Into Sensors, Interactive Surfaces 

Walls are what they are: big, dull dividers. With a few applications of conductive paint and some electronics, however, walls can become smart infrastructure that senses human touch and detects gestures and appliance use.

Researchers at Carnegie Mellon University and Disney Research found that they could transform dumb walls into smart walls at relatively low cost, about $20 per square meter, using simple tools and techniques such as a paint roller.

These new capabilities might enable users to place or move light switches or other controls to wherever on a wall is most convenient, or to control video games by using gestures. By monitoring activity in the room, the system could adjust light levels when a TV is turned on, or alert a user in another location when a laundry machine or electric kettle turns off.

The researchers found they could use conductive paint to create electrodes across the surface of a wall, enabling it to act both as a touchpad that tracks users’ touch and as an electromagnetic sensor that detects and tracks electrical devices and appliances.

Using painter’s tape, they laid down a cross-hatched pattern that created a grid of diamonds, which testing showed to be the most effective electrode pattern. After applying two coats of conductive paint with a roller, they removed the tape and connected the electrodes.

The electrode wall can operate in two modes: capacitive sensing and electromagnetic (EM) sensing. In capacitive sensing, the wall functions like any other capacitive touchpad: when a person touches the wall, the touch distorts the wall’s electrostatic field at that point. In EM sensing mode, the electrodes can detect the distinctive electromagnetic signatures of electrical or electronic devices, enabling the system to identify the devices and their locations. Similarly, if a person is wearing a device that emits an EM signature, the system can track that person’s location, said Yang Zhang, a Ph.D. student in the HCII.

Zhang will present a research paper on this sensing approach, called Wall++, at CHI 2018, the Conference on Human Factors in Computing Systems, April 21-26 in Montreal.
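The article outlines the two sensing modes but not the signal processing behind them. Below is an illustrative Python sketch of how such a system might localize a touch from capacitance deltas on the electrode grid and identify an appliance by matching its electromagnetic spectrum against known signatures. The grid size, threshold and nearest-neighbor matching are all assumptions for the sake of the example, not Wall++'s actual pipeline.

```python
import numpy as np

GRID_ROWS, GRID_COLS = 8, 12  # hypothetical electrode grid dimensions


def locate_touch(readings: np.ndarray, baseline: np.ndarray,
                 threshold: float = 5.0):
    """Capacitive mode: report the grid cell whose reading deviates most
    from the untouched baseline, if the deviation exceeds a threshold."""
    delta = np.abs(readings - baseline)
    r, c = np.unravel_index(np.argmax(delta), delta.shape)
    return (int(r), int(c)) if delta[r, c] > threshold else None


def identify_device(em_sample: np.ndarray,
                    signatures: dict[str, np.ndarray]) -> str:
    """EM mode: match the spectrum of a sampled waveform against known
    device signatures (nearest neighbor on normalized FFT magnitudes).

    Assumes each stored signature was normalized the same way.
    """
    spectrum = np.abs(np.fft.rfft(em_sample))
    spectrum /= np.linalg.norm(spectrum) + 1e-9  # scale-invariant match
    best, best_dist = "unknown", float("inf")
    for name, sig in signatures.items():
        dist = np.linalg.norm(spectrum - sig)
        if dist < best_dist:
            best, best_dist = name, dist
    return best


# Minimal usage example for the capacitive mode
baseline = np.zeros((GRID_ROWS, GRID_COLS))
frame = baseline.copy()
frame[3, 7] = 12.0                     # pretend a hand touches cell (3, 7)
print(locate_touch(frame, baseline))   # -> (3, 7)
```

The two functions mirror the two modes the article describes: touch localization only needs a per-cell comparison against a calibrated baseline, while device identification needs a library of previously recorded EM signatures to match against.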