You’re trying to figure out your location

Posted by: 五毒 | Posted on: 2022-4-19 06:30 | Views: 310 | Comments: 0 |



Teaching robots to do the simplest things is both difficult and time-consuming. This video, for example, shows an android attempting to fold a T-shirt by relying on specially written code.

As it does, it needs to take into account the shirt's size and texture, as well as the algorithm it is following. For artificial intelligence, it's a complex operation. But a computer science team at the University of Maryland at College Park is working to help robots perform better by using experience from their past actions.

Today, robots in many applications are quite slow, because they do not have the ability to predict one signal on the basis of another. Now, by creating perceptual memories, the robots can use prior experience in order to do things very fast.

Combining robotic perception and motor commands is an ambitious task, one that Anton Mitrokhin and his team are tackling with the so-called hyperdimensional computing theory.

They say it will help different kinds of artificial intelligence perform better. For instance, they say smartphones will be able to analyze the situation and accumulate experience. There are many applications of neural networks when it comes to mobile devices.

For example, you’re trying to figure out your location. You can take a picture of your surroundings and the device will tell you where you are. Drones will be able to analyze their own surroundings and won’t need to be directed by a human pilot.

The University of Maryland scientists admit they’ve taken on an ambitious project and that there’s still a lot of work ahead. But Aloimonos hopes to have a big impact by creating one system that all AI developers can use.

Somehow, the whole operation of trying to develop an intelligent system resembles the biblical Tower of Babel, where you have all these people who speak different languages.

Now the new framework that we introduced allows us to have one language, the language of hyperdimensional vectors, to express all the information that is there in perception, decision, cognition and motion.
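For readers curious what "one language of hyperdimensional vectors" can look like in practice, here is a minimal Python sketch of the basic operations hyperdimensional computing uses: binding a percept to an action, bundling several such pairs into one memory vector, and recalling by similarity. The vector size, the toy percept and action names, and the use of bipolar vectors are illustrative assumptions, not the Maryland team's actual code.

```python
import numpy as np

D = 10_000  # hypervectors are typically very long (assumed size)
rng = np.random.default_rng(0)

def hdv():
    """Random bipolar hypervector with +1/-1 entries."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind two hypervectors (element-wise product) into a key-value pair."""
    return a * b

def bundle(vectors):
    """Bundle hypervectors into one memory by element-wise majority vote."""
    return np.sign(np.sum(vectors, axis=0))

def similarity(a, b):
    """Cosine similarity: near 0 for unrelated vectors, clearly positive for related ones."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy "perceptual memory": pair each percept with the motor command that followed it.
percepts = {name: hdv() for name in ["see_shirt", "see_sleeve", "see_collar"]}
actions  = {name: hdv() for name in ["grasp", "fold", "smooth"]}

memory = bundle([
    bind(percepts["see_shirt"],  actions["grasp"]),
    bind(percepts["see_sleeve"], actions["fold"]),
    bind(percepts["see_collar"], actions["smooth"]),
])

# Query: binding the memory with a percept recovers a noisy copy of the stored action.
query = bind(memory, percepts["see_sleeve"])
best = max(actions, key=lambda name: similarity(query, actions[name]))
print("recalled action for 'see_sleeve':", best)  # expected: fold
```

Because percepts, actions and whole memories are all vectors of the same kind, the same few operations cover perception, decision and motion, which is the "one language" idea in the quote above.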

Another dramatic change: robots' eyes. Scientists say they are not just plain cameras anymore, but dynamic vision sensors that respond to movement and light. The mechanism behind them works similarly to that of the human eye, sending non-stop signals to the robotic brain, where they are recorded and stored, forming the equivalent of lived experience. And that, scientists say, brings robots a lot closer to humans.
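To illustrate how a dynamic vision sensor differs from an ordinary camera, here is a small, hypothetical Python sketch: instead of sending full frames, each pixel emits an event (timestamp, x, y, polarity) whenever its brightness changes by more than a threshold, which is roughly how event cameras report movement and light. The threshold value and the synthetic frames are assumptions for illustration, not a model of any specific sensor.

```python
import numpy as np

THRESHOLD = 0.15  # assumed log-brightness change needed to trigger an event

def events_from_frames(frames, timestamps, threshold=THRESHOLD):
    """Simulate a dynamic vision sensor: emit (t, x, y, polarity) events
    wherever a pixel's log-brightness changes by more than `threshold`."""
    log_prev = np.log1p(frames[0].astype(float))
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log1p(frame.astype(float))
        diff = log_cur - log_prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1  # brighter (+1) or darker (-1)
            events.append((t, int(x), int(y), polarity))
        # Only pixels that fired update their reference level, as in real sensors.
        log_prev[ys, xs] = log_cur[ys, xs]
    return events

# Tiny synthetic example: a bright spot moving one pixel to the right.
frames = np.zeros((3, 4, 4), dtype=np.uint8)
frames[0, 1, 1] = frames[1, 1, 2] = frames[2, 1, 3] = 255
print(events_from_frames(frames, timestamps=[0.0, 0.01, 0.02]))
```

The result is a sparse, non-stop stream of events rather than a stack of frames, which is why such sensors respond quickly to motion while ignoring parts of the scene that do not change.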

Nastassia Jaumen, for VOA News, College Park, Maryland.

