First robot mobile goes on sale in Japan

The world’s first robotic mobile phone, RoBoHon, a pocket-size walking and dancing robot, went on sale Thursday in Japan. The human-shaped smartphone, developed by Japanese electronics company Sharp and engineer Tomotaka Takahashi, inventor of the first robot astronaut ‘Kirobo’, carries a base price of 198,000 yen ($1,800), EFE news reported.

To mark the launch, the Osaka-based company opened the RoBoHon Cafe in Tokyo, where visitors can test the robot until June 7. The Japanese electronics manufacturer is producing 5,000 units per month, aiming to lead this type of mobile after sealing a takeover agreement with the Taiwanese company Hon Hai, also known as Foxconn, best known for assembling iPhones and iPads for Apple.

Apart from being used as a mobile, the 19.5-cm-tall humanoid robot, weighing 390 grams, can be used as a projector to display videos, photos or maps. It also offers a wide range of applications based on conversation with the android. RoBoHon can also recognise people’s faces using its front camera and then address them by name.


NASA’s Valkyrie Humanoid Upgraded, Delivered to Robotics Labs in U.S. and Europe

It’s always exciting when a new robot arrives in your lab. Usually, the more expensive the robot is, the more exciting it is. With the possible exception of Boston Dynamics’ ATLAS, NASA’s Valkyrie has got to be one of the most expensive humanoid robots ever made, and last year, NASA promised to give away (or, at least, lend) three of them to universities in the hope that Valkyrie will learn some new skills.

Within the last few weeks, the University of Massachusetts Lowell, which teamed up with Northeastern University in Boston, Mass., took delivery of their fancy new robot, as did MIT and the University of Edinburgh in Scotland. We talked to Holly Yanco at UMass Lowell and Taskin Padir at Northeastern, along with Sethu Vijayakumar at Edinburgh and Russ Tedrake at MIT, about what it’s like to have a smokin’ hot space robot show up on your doorstep in a bunch of pieces. We also asked them what they’ve told NASA that they’re going to do with it, and what they actually plan to do with it. NASA, you will be happy to hear that these last two things are only slightly different.

When we first met Valkyrie at NASA’s Johnson Space Center in Houston back in 2013, we were told that it was designed to be easy to take apart and then reassemble. It’s pretty cool to see this modularity in action; here’s a video of how Valkyrie got put together in Massachusetts:

Loyal Valkyrie fans will have immediately noticed that this version of Valkyrie (a “Unit D”) has some upgrades. Most notably, the robot’s head has been redesigned in order to accommodate a Multisense SL camera and LIDAR array, the same kind of “head” that ATLAS has. Also, the cameras in Valkyrie’s legs have been removed, the range of motion of the pelvis has been increased, and the fabric leg covers have been replaced with a plastic shell that incorporates new fans to help keep the robot cool as it attempts more dynamic walking tasks. There are some other minor upgrades to improve Valkyrie’s modularity and make the batteries safer, but the big deal is the Multisense, since it will allow people with experience doing perception on ATLAS to translate much more easily to working with Valkyrie.

This particular Valkyrie will live at the NERVE (New England Robotics Validation and Experimentation) Center at UMass Lowell, which is basically a big playground for robots, by which I mean a test area for robots. This is ideal, since part of NASA’s grant involves providing access to teams participating in NASA’s Space Robotics Challenge.

Meanwhile, across the pond, another Valkyrie arrived in pieces at the Edinburgh Centre for Robotics at the University of Edinburgh, and was put back together again.


Zero Zero’s Camera Drone Could Be a Robot Command Center in the Future

Startup Zero Zero Robotics just took the wraps off its eye in the sky, the Hover Camera. The company hasn’t set a price but expects the lightweight drone (it weighs in at 240 grams) to cost under US $600.

The flying camera is a relatively new type of gadget. It all started about a year ago, when startup Lily Camera came out of stealth with its $500 to $1000 camera drone and argued that it wasn’t so much a drone as a simple-to-use flying camera. This March, drone-maker DJI introduced the Phantom 4, with autonomous flying and tracking features that essentially make it that company’s first flying camera at $1400.

Flying cameras are drones designed for consumers who don’t want to learn how to fly a drone; they just want to take pictures. The cameras have tracking capabilities so they can keep a subject in sight, and can autonomously hover or circle, as well as take off and land on command without the user having to control the ascent or descent precisely.

People are betting big on these companies. Lily, with founders out of UC Berkeley, has $15 million in funding and $34 million in preorders. Zero Zero, with founders out of Stanford, has $25 million in funding.

One—or perhaps more—of these gadgets will catch on. In a few weeks, I’ll be attending my son’s high school graduation in Silicon Valley, with, I’m sure, my view obscured by parents using pads and phones and selfie-sticks to record the moment. By next spring, I’m betting at least a few of the selfie-sticks and tripods are going to be replaced by camera drones. I’m not sure if that’s going to be more or less annoying.

“It has two cameras. The front-facing camera is a 13-megapixel camera that records video, but it also runs Simultaneous Localization and Mapping (SLAM), an algorithm that allows the drone to determine where it is. It also has a down-facing video camera, running an algorithm called optical flow, that looks at the ground at 60 frames per second, so the Hover knows when it moves and can correct itself. These visual sensors give inputs of actual position and speed, while the accelerometer and gyroscope give relative position. All these signals are fed into the flight control algorithm, so when I throw it up in the air, it can just hover there.

“When I want it to follow me around, it uses facial and body recognition to follow me and make sure I’m in the frame. It can follow anybody I choose. In the final version, though not just yet, it will do a 360-degree scan around itself and pull out all the faces; they pop up on my phone, and I can choose which person to follow automatically. Or I can control it manually with swipes and other gestures.

“This approach differs from the Lily Camera and the Phantom 4. Lily does most of its tracking with GPS, so you have to wear a device on your wrist.

“The Phantom 4 is running a lot of visual computation, but it relies on motion tracking, which lets it follow a car, say. We are running body and face recognition.”
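The fusion loop described in the quote above can be illustrated with a minimal complementary-filter sketch: a fast but drifty IMU estimate is periodically corrected by a slower absolute fix from the vision system. The gains, rates, and structure here are illustrative assumptions, not Zero Zero’s actual flight controller, which has not been published.

```python
# Minimal 1-axis sensor-fusion sketch: integrate IMU acceleration for a
# fast position estimate, then nudge it toward each absolute vision fix
# (SLAM / optical flow). All numbers are illustrative only.

def fuse(imu_samples, vision_fixes, dt=1 / 200, alpha=0.02):
    """imu_samples: per-tick acceleration (m/s^2).
    vision_fixes: dict tick -> absolute position from the vision system (m)."""
    pos, vel = 0.0, 0.0
    trace = []
    for tick, accel in enumerate(imu_samples):
        vel += accel * dt          # integrate acceleration -> velocity
        pos += vel * dt            # integrate velocity -> position (drifts)
        if tick in vision_fixes:   # vision runs slower but doesn't drift
            pos += alpha * (vision_fixes[tick] - pos)  # pull toward the fix
        trace.append(pos)
    return trace

# A stationary drone with a small accelerometer bias: the pure-IMU estimate
# drifts away, while periodic vision fixes (true position = 0) rein it in.
drift_only = fuse([0.05] * 1000, {})
corrected = fuse([0.05] * 1000, {t: 0.0 for t in range(0, 1000, 20)})
assert abs(corrected[-1]) < abs(drift_only[-1])
```

A real flight controller would also correct the velocity and bias estimates (typically with a Kalman filter) rather than only the position, but the principle, fast relative sensing corrected by slow absolute sensing, is the same.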

Zero Zero has built 1,000 preproduction models; it’s using some for its own testing but plans to give 200 to beta testers, whom it will select from applicants who commit to purchasing a production unit down the line. It expects to ramp up production and start taking preorders in the summer, with the drones widely distributed by the 2016 holiday season.


First autonomous robot to operate on soft tissue outdoes human surgeons

Step aside, Ben Carson. The once lauded ability to perform delicate operations with gifted hands may soon be replaced with the consistent precision of an autonomous robot. And—bonus—robots don’t get sleepy.

In a world first, researchers report using an autonomous robot to perform surgical operations on soft tissue in living pigs, where the adroit droid stitched up severed bowels. The researchers published the robotic reveal in the journal Science Translational Medicine, and they noted the new machinery surpassed the consistency and precision of expert surgeons, laparoscopy, and robot-assisted (non-autonomous robotic) surgery.

The authors, led by Peter Kim at Children’s National Health System in Washington, DC, emphasized this feat is not intended to be a step toward completely replacing surgeons. Rather, they want the technology to provide new tools that help every operation go smoothly. “By having a tool like this and by making the procedures more intelligent, we can ensure better outcomes for patients,” Kim said.

Kim and his colleagues aren’t the first to use robotics or even autonomous robots in surgery, of course. But non-autonomous robots have yet to offer the quality assurance for every operation that doctors and engineers had hoped for. And autonomous robots have so far only made themselves useful for digging into rigid body parts, such as bones, while historically failing with slippery, wiggly soft tissue. Those squishy innards pose a particular challenge to autonomous robots because they easily move around and look alike, making it difficult for the machinery to keep track of and manipulate all the bits and slices.

To get around the problem, Kim and his team started with preexisting autonomous robots, which look much like a mechanical arm, and added new imaging features. The new robot is called STAR, for Smart Tissue Autonomous Robot, and it includes a 3D visual tracking system and a custom near-infrared fluorescent (NIRF) imaging system. The 3D system works by having an array of microlenses that triangulate the spatial position of every pixel in an image. And the NIRF system allows the robot to precisely spot and track the tissue in need of surgical work using luminescent markers—those glowing tissue tags are added by doctors prior to the surgery.
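The triangulation principle behind such a lens array can be illustrated with the standard relation between disparity and depth: two views separated by a baseline see the same point shifted by some number of pixels, and depth follows from that shift. The numbers below are illustrative assumptions, not STAR’s actual calibration.

```python
# Depth from disparity: views separated by baseline B (metres) see a point
# shifted by d pixels; with focal length f (in pixels), depth Z = f * B / d.
# A microlens array applies the same geometry across many tiny sub-views.
# All values here are illustrative, not taken from the STAR system.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("point at infinity or behind the camera")
    return f_px * baseline_m / disparity_px

def point_3d(u, v, cx, cy, f_px, z):
    """Back-project pixel (u, v) at depth z into camera coordinates."""
    return ((u - cx) * z / f_px, (v - cy) * z / f_px, z)

# A hypothetical marker seen with a 28-pixel disparity:
z = depth_from_disparity(f_px=1400, baseline_m=0.004, disparity_px=28)
# 1400 * 0.004 / 28 = 0.2 m, i.e. 20 cm from the camera
x, y, _ = point_3d(700, 500, cx=640, cy=480, f_px=1400, z=z)
```

Repeating this for every pixel, as the microlens array does, yields the per-pixel spatial positions the article describes, which the NIRF markers then anchor to the specific tissue being sutured.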

With the spatially informed robot, the researchers next boosted the machine’s dexterity by adding an articulated suturing tool, with eight degrees of freedom, that can sew up tissue in tight spaces. They also added an extra sensor that ensures the proper tension for each stitch and fed the robot a suturing algorithm based on expert techniques.

In tests with out-of-body tissues, STAR met or exceeded the performance of other surgical methods in terms of metrics like needle placement, stitch spacing and tension, the number of mistakes, and the potential for the seam to leak. It’s like a “smart sewing machine,” Kim jokes.

In a test on four live pigs, STAR successfully reconnected sliced intestines, a procedure generally called anastomosis. Such a procedure for joining tubular body parts is used in operations like reconstructive bowel surgery and blood vessel repair.

Despite its accuracy, STAR took considerably longer to perform the anastomosis than a human surgeon, averaging around 50 minutes, while a live doctor took about eight minutes. The authors explain that they took their time during the initial tests, which they liken to parents cautiously watching their child learn to walk.

This is a proof-of-concept, Kim said. It will likely be years before STAR shows up in a real operating room, but autonomous robotic soft tissue surgery is on the horizon.


650V fast body diode mosfets for soft switching

Vishay has introduced 650V fast-body-diode mosfets for soft switching. “They provide additional voltage headroom for industrial, telecom, and renewable energy applications when desired,” said the firm.

Built on E Series super-junction technology, they feature 10x lower reverse recovery charge (Qrr) than standard mosfets, which allows the devices to block the full breakdown voltage more quickly, helping to avoid failure from shoot-through and thermal overstress.

Also, reliability is increased in zero voltage switching (ZVS) or soft switching topologies, such as phase-shifted bridges, LLC converters, and three-level inverters.
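The practical benefit of a low Qrr can be seen in a back-of-the-envelope loss estimate: reverse-recovery loss scales roughly as P ≈ Qrr × V × f_sw, so a 10x reduction in Qrr cuts that loss term by 10x. The figures below are made up for illustration and are not from Vishay’s datasheet.

```python
# Rough reverse-recovery loss estimate: P_rr ≈ Qrr * V_bus * f_sw.
# All values are illustrative only, not taken from the Vishay datasheet.

def p_rr(qrr_c, v_bus, f_sw):
    """Reverse-recovery power loss (W) from charge (C), voltage (V), freq (Hz)."""
    return qrr_c * v_bus * f_sw

standard = p_rr(qrr_c=10e-6, v_bus=400, f_sw=100e3)   # hypothetical 10 uC diode
fast     = p_rr(qrr_c=1e-6,  v_bus=400, f_sw=100e3)   # 10x lower Qrr
# At these made-up numbers, the loss term drops by the same factor of 10.
assert abs(standard - 10 * fast) < 1e-9
```

This is only the recovery term; in a hard-commutated bridge the same charge also drives the shoot-through current spike mentioned above, which is why a faster body diode improves reliability as well as efficiency.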

The 21A SiHx21N65EF is offered in five packages, while the 28A SiHx28N65EF and 33A SiHG33N65EF are each available in two.

On-resistance is down to 157, 102, and 95mΩ, respectively – see table below for capacitance.

The devices are designed to withstand high-energy pulses in avalanche and commutation mode, with guaranteed limits through 100% UIS testing.

Applications are foreseen in solar inverters, servers and telecom power systems, ATX/Silver Box PC PSUs, welding equipment, uninterruptible power supplies, battery chargers, electric vehicle (EV) charging, LED lighting, high-intensity discharge (HID) lighting and fluorescent lighting.


3D technique promises accurate coloured thermoformed mouldings

How can individual pieces with complex coloured surfaces be produced quickly and cheaply? A new form of thermoforming is the answer, according to Swiss university ETH Zurich.

In thermoforming, a plastic sheet is warmed to near melting point and sucked onto a mould – it is the technique used to make yoghurt pots.

“The new method is a clever combination of established thermoforming and software which allows even ambitious amateurs to produce individual pieces or small batches of objects with structurally complex and coloured surfaces quickly and cheaply,” said the university.

There are two parts to the process, both starting with the same 3D virtual model, and both using an accurate simulation of material flow during thermoforming created by researcher Christian Schüller at ETH’s Interactive Geometry Lab.

Part one uses conventional 3D printing to make a mould of polylactic acid (PLA), from which a secondary, heat-resistant, thermoforming plaster mould is made.

In part two, the software pre-distorts the required 3D coloured surface into a 2D image, which is printed onto special transfer paper using a standard laser printer. Pressure and heat allow this image to be transferred onto the surface of a flat plastic sheet.

When the flat plastic sheet is thermoformed over the plaster mould, the outside of the plastic sheet ends up following the contours of the original 3D virtual model, and its colours stretch into their proper, undistorted places.

“The deformation of the plastic changes the printed image. But our software accurately calculates and compensates for this deformation,” said Schüller.
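The compensation Schüller describes amounts to an inverse mapping: if the simulation tells you where each point on the flat sheet ends up on the finished surface, the software can look colours up “backwards” so they land correctly after stretching. A toy one-dimensional sketch follows; the stretch function is a made-up stand-in for ETH’s full material-flow simulation.

```python
# Toy 1-D pre-distortion sketch. Suppose the simulation says material at
# flat-sheet coordinate s ends up at surface coordinate t = stretch(s).
# To make a target image appear undistorted on the finished surface, print
# each flat-sheet pixel with the colour the *stretched* position should show.
# The stretch function here is made up; ETH's tool uses a full simulation.

def stretch(s):
    # hypothetical quadratic stretch: material near s=1 is pulled further
    return s * s

def predistort(target, n):
    """target: function surface-coord [0,1] -> colour; returns flat-sheet image."""
    return [target(stretch(i / (n - 1))) for i in range(n)]

# Target: a stripe covering the middle half of the finished surface.
def stripe(t):
    return 1 if 0.25 <= t <= 0.75 else 0

flat = predistort(stripe, 101)
# After forming, flat pixel i lands at stretch(i/100), so the printed stripe
# edges sit at sqrt(0.25)=0.5 and sqrt(0.75)~0.87 of the flat sheet: the
# image looks distorted flat, but stretches into place on the mould.
```

The real system does the same thing in 2D over the simulated deformation field, which is why the flat transfer print looks warped before forming.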

The thermoforming has been tested with complex objects including a Chinese mask and various model-making components, such as a car body shell and food replicas.

“Teeth in the original mask are decorated with gold paint,” said Schüller. “This detail is reproduced exactly in the copy. The surface has a high-quality look, and the colours and structure are almost identical to those of the original.”

Numerous copies can be made by using the plaster cast multiple times. “The replica has a high-quality appearance, and for many applications it’s cheaper and faster than today’s 3D colour printing process,” says Schüller.

The researchers are convinced the thermoforming method can be used in industrial applications to mould prototypes before large-scale production, and that architectural firms and modelers could also benefit.

The naturally high surface gloss makes the process less suitable for reproducing wood or stone surfaces, said the university.

The work will be presented at ACM SIGGRAPH in California in July.


Intel overtakes ST for third place in industrial semi market

TI was the leading vendor of semiconductors to the industrial sector in 2015, ahead of Infineon Technologies, says Semicast Research.

Intel passed ST to become the third largest vendor, with Renesas Electronics completing the top five.

Semicast defines the industrial sector to include traditional areas such as factory automation, motor drives, lighting, building automation, test & measurement and power & energy, as well as medical electronics and industrial transportation; the aerospace & defense sector is excluded from the analysis. Using this definition, Semicast estimates that revenues for industrial semiconductors totaled $40.7 billion in 2015.

Semicast’s industrial semiconductor vendor share analysis ranks TI as the leading supplier in 2015, with an estimated market share of 8.1%, ahead of Infineon with 6.8%, Intel (4.9%), STMicroelectronics (4.4%) and Renesas (3.8%).

Colin Barnden, Principal Analyst at Semicast Research and study author, commented “In practice the industrial sector is a collection of markets within a market and is heavily fragmented across applications, OEMs and regions. Accordingly, it has no dominant semiconductor vendor, with the top ten together accounting for only around 40% of the total. The industrial semiconductor market is also fragmented across product types, with the three largest categories (analog, optoelectronics and MCU/MPU) accounting for around two-thirds of revenues, but with no one vendor strong in all three areas.”

2015 was a record year for M&A activity in the semiconductor industry and this has influenced vendor rankings in the industrial sector too. Infineon’s acquisition of International Rectifier in January 2015 consolidated its position as number two supplier; Intel’s acquisition of Altera at the end of December 2015 raised it above STMicroelectronics to third; NXP’s acquisition of Freescale Semiconductor in early December 2015 secured the combined company seventh position in the vendor ranking. In contrast, TI has not undertaken any significant mergers and acquisitions activity since the purchase of National Semiconductor almost five years ago, and has instead focused on organic sales growth.

Changes in the value of the Euro and Yen relative to the US Dollar have also impacted the vendor share ranking. Compared with 2014, the Euro was an average of sixteen percent weaker against the US Dollar in 2015, while the Yen was almost thirteen percent weaker.

Revenues for industrial semiconductors have now doubled since 2009, compared with growth of around fifty percent for the total semiconductor market over the same period. Barnden summed up: “Companies that may have dismissed the industrial sector in the past would be advised to take a closer look, particularly as medium-term growth prospects have slowed in other sectors, such as mobile and PC.”

While supply to the industrial sector is led by some of the largest semiconductor vendors, the diverse nature of the application and customer base means there is room for many.


Microchip’s hardware-encrypted micro for IoT

Microchip has launched a hardware crypto-enabled 32-bit microcontroller which can add security to IoT devices, offering encryption and authentication.

The chip allows for pre-boot authentication of the system firmware in order to ensure that the firmware is untouched and uncorrupted, thereby preventing security attacks such as man-in-the-middle, denial-of-service and back-door vulnerabilities.

It can also be used to authenticate firmware updates, protecting the system from malware or memory corruption.
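The pre-boot authentication flow is, at its core: hash the firmware image, verify a signature over it with a key an attacker cannot alter, and refuse to boot on any mismatch. The sketch below illustrates the idea only; it substitutes HMAC for the CEC1302’s actual public-key engine, and the key and image contents are hypothetical.

```python
# Minimal firmware-authentication sketch. Real secure-boot hardware such as
# the CEC1302 verifies an asymmetric signature with an immutable key; this
# sketch substitutes HMAC-SHA-256 to keep the idea self-contained.
import hashlib
import hmac

ROOT_KEY = b"factory-provisioned-secret"   # hypothetical immutable key

def sign_image(image: bytes) -> bytes:
    """Produce the tag shipped alongside a firmware image."""
    return hmac.new(ROOT_KEY, image, hashlib.sha256).digest()

def verify_and_boot(image: bytes, signature: bytes) -> bool:
    """Recompute the tag and compare in constant time before booting."""
    expected = hmac.new(ROOT_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False            # refuse to boot tampered firmware
    return True                 # hand control to the verified image

firmware = b"...application code..."       # hypothetical image contents
sig = sign_image(firmware)
assert verify_and_boot(firmware, sig)
assert not verify_and_boot(firmware + b"\x00", sig)  # any change is caught
```

The same check applied to a downloaded image before it is written to flash is what authenticates firmware updates against malware or corruption.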

The device offers private key and customer programming flexibility with a full-featured microcontroller in a single-package solution, minimising customer risk.

The device also saves power and improves application performance. In addition, since the CEC1302 is a full 32-bit microcontroller with an ARM Cortex-M4 core, adding security functionality results in only a small additional cost.

The CEC1302 can be used as a standalone security coprocessor or can replace an existing microcontroller. The hardware-enabled public key engine of the device is also 20 to 50 times faster than firmware-enabled algorithms, and the hardware-enabled hashing is 100 times faster.

To develop applications quickly on the CEC1302, MikroElektronika’s CEC1302 Clicker (MIKROE-1970) and CEC1302 Clicker 2 (MIKROE-1969) boards can be used with MikroElektronika’s complete development toolchain for Microchip CEC1302 ARM Cortex-M4 MCUs, which includes compilers, development boards and programmers/debuggers, or with standard third-party ARM MCU toolchains.


Engineers create a better way to boil water, with industrial, electronics applications

Engineers at Oregon State University have found a new way to induce and control boiling bubble formation that may allow everything from industrial-sized boilers to advanced electronics to work better and last longer.

Advances in this technology have been published in Scientific Reports, and a patent application has been filed.

The concept could be useful in two ways, researchers say — either to boil water and create steam more readily, like in a boiler or a clothing iron; or with a product such as an electronics device to release heat more readily while working at a cooler temperature.

“One of the key limitations for electronic devices is the heat they generate, and something that helps dissipate that heat will help them operate at faster speeds and prevent failure,” said Chih-hung Chang, a professor of electrical engineering in the OSU College of Engineering. “The more bubbles you can generate, the more cooling you can achieve.

“On the other hand, if you want to create steam at a lower surface temperature, this approach should be very useful in boilers and improve their efficiency. We’ve already shown that it can be done on large surfaces and should be able to scale up in size to commercial use.”

The new approach is based on the use of piezoelectric inkjet printing to create hydrophobic polymer “dots” on a substrate, and then deposit a hydrophilic zinc oxide nanostructure on top of that. The zinc oxide nanostructure only grows in the area without dots. By controlling both the hydrophobic and hydrophilic structure of the material, bubble formation can be precisely controlled and manipulated for the desired goal.

This technology allows researchers to control both boiling and condensation processes, as well as spatial bubble nucleation sites, bubble onset and departure frequency, heat transfer coefficient and critical heat flux for the first time.

In electronics, engineers say this technology may have applications with some types of solar energy, advanced lasers, radars, and power electronics — anywhere it’s necessary to dissipate high heat levels.

In industry, a significant possibility is more efficient operation of the steam boilers used to produce electricity in large electric generating facilities.

This work was supported by the OSU Venture Development Fund and the Scalable Nanomanufacturing Program of the National Science Foundation.


Twitter can help teachers engage with students

Get ready to say good morning to Twitter in the classroom: the micro-blogging platform can help teachers engage students more efficiently and better prepare them to take on new-age challenges, researchers reveal.

Twitter, if used properly, can produce better outcomes among middle school students and enhance the way children learn in the 21st century.

“Our work adds a critical lens to the role of open social networking tools such as Twitter in the context of adolescents’ learning and raises new questions about the potential for social media as a lever for increasing the personalisation of education,” explained Penny Bishop, professor and director of the Tarrant Institute for Innovative Education at the University of Vermont.

Lead researcher Ryan Becker used his middle school science classes to conduct the research in conjunction with co-author Bishop.

Becker found that 95 per cent of his students agreed or strongly agreed that Twitter enabled them to follow real science in real time as it develops around the world.

Particularly motivating was the ability to interact via Twitter with leading organisations like the US space agency NASA and science-related programmes.

The findings highlight the potential of Twitter as a means to personalise learning and to expand secondary students’ encounters with science professionals and organisations.

The study revealed that 93 per cent of students surveyed think Twitter enabled them to interact and share perspectives with a global audience outside the classroom.

“When I have something important to share about science that I like, as many as 52 people (Twitter followers) can see what I tweet instantly,” said one student.

Another student said they use Twitter for academic support by tweeting with other students about concepts, assignments and projects.

Ninety-one per cent said Twitter helped them make connections between science and their own lives and interests.

“Twitter has made me think about things that I like and had me think about the science related to them,” added another student.

Others said Twitter helped them learn about science in new ways that related to their everyday lives.

Additionally, 81 per cent of students agreed that Twitter helped them think creatively about new ways to communicate science.

Twitter is also an extremely powerful assessment tool, according to Becker, who recommends displaying tweets on an electronic “smart” board so students and teachers can assess and discuss them together.

Teachers can also ask students to tweet examples of specific scientific concepts like the students in Becker’s class who tweeted personal examples of Newton’s First Law.

Teachers can also have students respond to scientific poll questions and share instant results with their class.

Students continued to tweet outside of class, making certain topics a constant conversation.

The 140-character limit also forces students to distill down major concepts like “what is chemistry,” Becker noted in a paper forthcoming in Middle School Journal.