‘Consciousness’ in robots was once taboo. Now it’s the last word – 04/18/2023 – Tech



Hod Lipson, a mechanical engineer who runs the Creative Machines Lab at Columbia University, has shaped most of his career around what some in his industry call the “c-word.”

On a sunny morning last October, the Israeli-born roboticist sat behind a desk in his lab and explained himself. “That subject was taboo,” he said, with a smile that exposed a small gap between his front teeth. “We were almost forbidden to talk about it – ‘Don’t talk about the ‘c-word’; you won’t get tenure’ – so at first I had to frame it as something else.”

This was in the early 2000s, when Lipson was an assistant professor at Cornell University. He was working to create machines that could sense when there was something wrong with their own hardware — a broken part or faulty wiring — and then change their behavior to compensate for the deficiency, without a programmer’s guidance, just as a dog that loses a leg in an accident can learn to walk again in a different way.

This kind of built-in adaptability, Lipson argued, would become more important as we become more reliant on machines. Robots were being used for surgical procedures, food manufacturing and transportation; the applications for the machines seemed practically endless, and any error in their functioning, as they became more integrated into our lives, could spell disaster. “We are literally going to hand our lives over to a robot,” he said. “We want these machines to be resilient.”

One way to do this was to draw inspiration from nature. Animals, and especially humans, are good at adapting to change. This ability could be the result of millions of years of evolution, as resilience in response to injury and changes in the environment often increases an animal’s chances of surviving and reproducing. Lipson wondered if he could replicate this kind of natural selection in his code, creating a generalizable form of intelligence that could learn about its body and function, no matter what it looked like and did.

This type of intelligence, if it could be created, would be flexible and fast. It would be just as good in a tight spot as humans are; better, even. And as machine learning became more powerful, that goal seemed to become achievable. Lipson gained tenure, and his reputation as a creative and ambitious engineer grew. Then, over the past two years, he began to articulate his fundamental motivation for doing all this work. He started saying the “c-word” out loud: he wants to create conscious robots.

The Creative Machines Lab, on the first floor of the Seeley W. Mudd Building at Columbia, is organized into boxes. The room itself is a box, divided into square workstations lined with cubicles. Within that order, robots and robot parts are scattered about: a blue face staring down from a shelf; a green, spider-like machine spreading its legs out of a basket on the floor; a delicate dragonfly robot balanced on a work table. They are the evolutionary remnants of mechanical minds.

The first difficulty in studying the “c-word” is that there is no consensus on what it actually refers to. This is also the case for many vague concepts, such as freedom, meaning, love and existence, but that domain is usually reserved for philosophers, not engineers. Some have tried to build taxonomies of consciousness, explaining it by pointing to functions in the brain or to more metaphysical substances, but these efforts are hardly conclusive and give rise to further questions. Even one of the most widely shared descriptions of so-called phenomenal consciousness – an organism is conscious “if there is something it is like to be that organism,” as the philosopher Thomas Nagel put it – can seem unclear.

Entering these murky waters directly may seem fruitless to robotics experts and computer scientists. But, as Antonio Chella, a roboticist at the University of Palermo in Italy, put it, unless consciousness is taken into account, “something seems to be missing” in the workings of intelligent machines.

Trying to render the slippery “c-word” in terms of tractable inputs and functions is a difficult, if not impossible, task. Most roboticists and engineers tend to ignore philosophy and form their own functional definitions. Thomas Sheridan, an emeritus professor of mechanical engineering at the Massachusetts Institute of Technology, said he believes that consciousness can be reduced to a certain process and that the more we discover about the brain, the less nebulous the concept will seem. “What started out as scary and kind of religious ends up being straightforward science,” he said.

(These views are not reserved for roboticists. Philosophers such as Daniel Dennett and Patricia Churchland and the neuroscientist Michael Graziano, among others, have put forward a variety of functional theories of consciousness.)

Lipson and the members of the Creative Machines Lab are part of that tradition. “I need something that’s totally buildable, dry, non-romantic, just nuts and bolts,” he said. He established a practical criterion for consciousness: the ability to imagine oneself in the future.

The benefit of taking a stand on a functional theory of consciousness is that it allows for technological advancement.

One of the first self-aware robots to emerge from the Creative Machines Lab had four articulated legs and a black body with sensors attached at different points. By moving around and observing how the information coming into its sensors changed, the robot created a simulation of itself. As the robot continued to move, it used a machine learning algorithm to improve the fit between its self-model and its real body. The robot then used this self-image to figure out, in simulation, a way of moving forward. When it applied that method to its real body, it had discovered how to walk without ever being shown how.
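The loop described above — move randomly, record sensor readings, fit a self-model, then plan in that model instead of the real body — can be sketched in a few lines. This is a deliberately toy illustration, not the lab’s actual algorithm: the “body” is reduced to a single hidden motor gain, and the hypothetical names (`real_body`, `fit_self_model`, `plan_in_simulation`) are mine.

```python
import random

# Toy stand-in for the robot's real body: a hidden "motor gain"
# maps a command to forward displacement. The robot does not know
# this gain and must estimate it from its own sensor readings.
TRUE_GAIN = 0.8

def real_body(command):
    """Sensor reading: displacement the real body produces for a command."""
    return TRUE_GAIN * command

def fit_self_model(observations):
    """Least-squares estimate of the gain from (command, sensed) pairs."""
    num = sum(c * s for c, s in observations)
    den = sum(c * c for c, _ in observations)
    return num / den

def plan_in_simulation(gain, target):
    """Pick a command using the self-model only, never the real body."""
    return target / gain

# 1. Motor babbling: move randomly and record what the sensors report.
random.seed(0)
observations = [(c, real_body(c)) for c in (random.uniform(-1, 1) for _ in range(20))]

# 2. Fit the self-model to the recorded observations.
estimated_gain = fit_self_model(observations)

# 3. Plan in simulation, then execute the plan on the real body.
command = plan_in_simulation(estimated_gain, target=1.0)
print(round(real_body(command), 3))  # reaches the target displacement
```

In this noiseless toy the fitted gain matches the true one exactly; the real systems in the lab learn far richer self-models, but the division of labor — explore, model yourself, plan inside the model — is the same.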

That represented a big step forward, said Boyuan Chen, a roboticist at Duke University who worked at the Creative Machines Lab. “In my previous experience, whenever you trained a robot to perform a new function there was a human right there,” he said.

Recently, Chen and Lipson published an article in the journal Science Robotics that revealed their newest self-aware machine, a simple two-jointed arm attached to a table. Using cameras set up around it, the robot watched itself as it moved – “like a baby in a crib, looking at itself in a mirror,” Lipson said. Initially, it had no sense of where it was in space, but over the course of a few hours, with the help of a powerful deep learning algorithm and a probability model, it was able to identify itself in the world. “It has this sense of itself, a cloud,” Lipson said.

But was it really conscious?

The risk of committing to any theory of consciousness is that doing so opens up the possibility of criticism. Sure, self-awareness sounds important, but aren’t there other key features of awareness? Can we call something conscious if it doesn’t seem conscious to us?

Chella believes that consciousness cannot exist without language, and has been developing robots that can form internal monologues, reasoning with themselves and reflecting on the things they see around them. One of his robots recently managed to recognize itself in a mirror, passing what is probably the most famous test of animal self-awareness.

Joshua Bongard, a roboticist at the University of Vermont and a former member of the Creative Machines Lab, believes that consciousness is not just cognition and mental activity, but has an essentially bodily aspect.

Last summer, around the same time that Lipson and Chen released their newest robot, a Google engineer claimed that the company’s newly improved chatbot, called LaMDA, was sentient and deserved to be treated like a small child. This claim was met with skepticism, mainly because, as Lipson noted, the chatbot was processing “code written to complete a task”. There was no underlying structure of consciousness, just the illusion of consciousness, other researchers said. Lipson added, “The robot was not self-aware. It’s almost cheating.”

Translated by Luiz Roberto M. Gonçalves


