
LA NUOVA CONOSCENZA

GdM

Thursday, 12 September 2013

INVENTIONS AND APPLICATIONS "6": ROBOTICS AND PHOTOGRAMMETRIC SURVEYS


by: Dr. Giuseppe Cotellessa (ENEA)



“The application of the patent can also prove very useful in the field of robotics.”

“Robotics: experimentation and development of frontier expertise”

This line of activity aims to strengthen ENEA's competitiveness on innovative topics in autonomous robotics, so as to shape and sharpen its response to the country's needs. The tools are the development and consolidation of a centre of excellence, in collaboration with a network of national and international partners, both academic and industrial.
Robotics is establishing itself on the international scene as one of the most significant technological revolutions, with products that will span application areas ranging from personal assistance to entertainment, and from security to space.
The line of activity aims to strengthen ENEA's role, nationally and internationally, in conceiving, developing and transferring innovative robotics technologies to industry.
The most significant research themes that will form the programme's technological assets are the following:

Neural networks able to tackle the problem of contextual understanding of the environment and of simple situations, together with the related problem of making context-dependent decisions. Current networks are not capable of the conceptual leap that leads to contextual understanding, but the ideas and approaches developed within the unit look promising in this respect;

Swarm and collective intelligence methods and cooperating robotic systems, aimed at exploring completely unknown and/or dangerous environments (disaster areas, polluted areas, minefields, etc.) or large environments that require a very high level of detail (e.g. crop monitoring, planetary exploration);

Multichannel human-robot communication methods that allow, possibly with the help of neural networks, a contextual understanding of the human operator's intent, adding to the semantic content of language interpretive correctors based on physical excitation, fatigue, gestures and other parameters that can already be measured, at least partially, with existing techniques (a minimal illustrative sketch of this idea follows below).
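As an illustrative aside, the sketch below shows one purely hypothetical way such interpretive correctors could be combined with the semantic content of a command: a speech-recognition confidence is discounted by estimates of arousal and fatigue and blended with a gesture cue. The function, weights and thresholds are invented for illustration and are not part of the ENEA programme.

def interpret_intent(semantic_confidence, arousal, fatigue, gesture_agreement):
    """Hypothetical fusion of a spoken command with physiological correctors.

    All inputs are normalised to [0, 1]:
    semantic_confidence - how sure the speech recogniser is about the command
    arousal             - physical excitation estimated from voice or heart rate
    fatigue             - tiredness estimated from speech rate, blink rate, etc.
    gesture_agreement   - how well the observed gesture matches the command
    """
    # High arousal and fatigue make the verbal channel less reliable,
    # while a confirming gesture strengthens the overall interpretation.
    corrected = semantic_confidence * (1.0 - 0.3 * arousal) * (1.0 - 0.4 * fatigue)
    corrected = 0.7 * corrected + 0.3 * gesture_agreement
    return max(0.0, min(1.0, corrected))

# Calm, rested operator whose gesture confirms the spoken command:
print(interpret_intent(0.9, arousal=0.1, fatigue=0.2, gesture_agreement=0.8))
# Agitated, tired operator: the same utterance is trusted much less:
print(interpret_intent(0.9, arousal=0.9, fatigue=0.8, gesture_agreement=0.2))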
Two application areas have been identified as particularly promising, both for their interest to the country and for the attractiveness and size of their potential markets:
intelligence and automation technologies for public safety;
the area of personal care (assistance to the person).
Other application fields already explored in completed or nearly completed activities, such as artistic and cultural heritage, are established areas of further potential involvement.
The following deliverables are planned:



“artificial skin” systems;

a prototype three-dimensional acoustic camera for turbid waters, suitable not only for energy-related applications (maintenance of underwater extraction plants) but also for coastal surveillance and the monitoring of sensitive access points;

software control systems able to adapt to environmental conditions and, in particular, to withstand deliberately destructive attacks;

monitoring and counter-terrorism intervention systems for various operational scenarios;

multisensor integration software to support surveillance personnel, again in the context of airport surveillance;


autonomous mini-submarine systems for coastal surveillance; the construction of a rover for mine detection through the integrated use of innovative sensors is also under study.

_________________________________________________________________________________

“The application of the patented process can also prove extremely useful in photogrammetric surveying.”

PHOTOGRAMMETRIC SURVEYS

Equipment:
ZEISS Rec Elta RL/S total station with theodolite and pulse rangefinder, pole and prism;
ROLLEIFLEX 6008 metric camera;
ZEISS DISTAGON 4/40 mm metric lens;
ARCHIS software for photo rectification, STEREOMETRIC software for 3D viewing with CrystalEyes glasses;
Mustek 1200 A4 colour scanner, complete with driver and TWAIN DIRECT 32 OCR software.
[Figure RilFotMetr1: facade photo taken with the metric camera]



Objectives of the survey:
Detailed and complete identification of the building structure under examination.
The final result will provide information on the plan view, the various elevations, the 3D reconstruction and the analysis of the structural deterioration of the building under examination.
[Figure RilFotMetr2: photo rectified analytically using control points]


Survey methodology:
The digital photogrammetry system is used to produce documentation that allows the detailed and complete identification of the building structure under examination.
In particular, it provides information on the floor plan, the sections, the elevations, a possible 3D reconstruction, the thematic mapping of the materials and the analysis of the deterioration of the building under examination.
Photogrammetry is a survey technique which, through the digitisation of images, captures the geometric dimensions and all the information needed for an accurate survey.

The technique is carried out through a topographic survey, the rectification of the images and the restitution. Rectification follows a mathematical reconstruction model, carried out on a PC with dedicated software (a minimal sketch of the control-point approach is given below).
The graphic restitution of each individual elevation defines every construction detail, representing bricks, window frames, stones, mortar and so on. A faithful graphic representation will give restorers the largest possible amount of information for intervening on the deterioration of the subject under examination.
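The sketch below illustrates the general control-point approach to rectification: four points measured on the facade photo and their surveyed positions define a projective (homography) transform that is then applied to the whole image. It is a minimal example using OpenCV with invented coordinates and file names; it is not the actual ARCHIS implementation.

# Minimal rectification sketch (assumes: pip install opencv-python numpy).
# Coordinates and file names are invented for the example.
import cv2
import numpy as np

# Pixel positions of four control points on the facade photo, and their
# surveyed positions on the facade plane (e.g. in centimetres).
src = np.float32([[102, 88], [1830, 120], [1795, 1410], [130, 1382]])
dst = np.float32([[0, 0], [900, 0], [900, 650], [0, 650]])

H = cv2.getPerspectiveTransform(src, dst)   # 3x3 projective transform

img = cv2.imread("prospetto.jpg")           # facade photo from the metric camera
rectified = cv2.warpPerspective(img, H, (900, 650))
cv2.imwrite("prospetto_raddrizzato.jpg", rectified)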

[Figure RilFotMetr3: elevation drawn in CAD from the direct topographic survey and the photogrammetric vector restitution, with analysis of the current state of deterioration]


Applications:
Photogrammetric survey carried out on the main facade of Palazzo Agresta D'Alessandro in Rotondella (MT), with a vector restitution showing the identification of the materials and the analysis of the current state of deterioration.
[Figure RilFotMetr4: elevation with identification of the materials]




26 comments:

Marco La Rosa said...

COMMENT RECEIVED BY EMAIL FROM DR. COTELLESSA ON "ROBOTICS":

PART ONE

Neuromorphic computing



The machine of a new soul

Computers will help people to understand brains better. And understanding brains will help people to build better computers





http://cdn.static-economist.com/sites/default/files/imagecache/full-width/images/print-edition/D/20130803_STD001_0.jpg

ANALOGIES change. Once, it was fashionable to describe the brain as being like the hydraulic systems employed to create pleasing fountains for 17th-century aristocrats’ gardens. As technology moved on, first the telegraph network and then the telephone exchange became the metaphor of choice. Now it is the turn of the computer. But though the brain-as-computer is, indeed, only a metaphor, one group of scientists would like to stand that metaphor on its head. Instead of thinking of brains as being like computers, they wish to make computers more like brains. This way, they believe, humanity will end up not only with a better understanding of how the brain works, but also with better, smarter computers.

These visionaries describe themselves as neuromorphic engineers. Their goal, according to Karlheinz Meier, a physicist at the University of Heidelberg who is one of their leaders, is to design a computer that has some—and preferably all—of three characteristics that brains have and computers do not. These are: low power consumption (human brains use about 20 watts, whereas the supercomputers currently used to try to simulate them need megawatts); fault tolerance (losing just one transistor can wreck a microprocessor, but brains lose neurons all the time); and a lack of need to be programmed (brains learn and change spontaneously as they interact with the world, instead of following the fixed paths and branches of a predetermined algorithm).

To achieve these goals, however, neuromorphic engineers will have to make the computer-brain analogy real. And since no one knows how brains actually work, they may have to solve that problem for themselves, as well. This means filling in the gaps in neuroscientists’ understanding of the organ. In particular, it means building artificial brain cells and connecting them up in various ways, to try to mimic what happens naturally in the brain.

PART TWO FOLLOWS

Marco La Rosa said...

PART TWO:

Analogous analogues

The yawning gap in neuroscientists’ understanding of their topic is in the intermediate scale of the brain’s anatomy. Science has a passable knowledge of how individual nerve cells, known as neurons, work. It also knows which visible lobes and ganglia of the brain do what. But how the neurons are organised in these lobes and ganglia remains obscure. Yet this is the level of organisation that does the actual thinking—and is, presumably, the seat of consciousness. That is why mapping and understanding it is to be one of the main objectives of America’s BRAIN initiative, announced with great fanfare by Barack Obama in April. It may be, though, that the only way to understand what the map shows is to model it on computers. It may even be that the models will come first, and thus guide the mappers. Neuromorphic engineering might, in other words, discover the fundamental principles of thinking before neuroscience does.

Two of the most advanced neuromorphic programmes are being conducted under the auspices of the Human Brain Project (HBP), an ambitious attempt by a confederation of European scientific institutions to build a simulacrum of the brain by 2023. The computers under development in these programmes use fundamentally different approaches. One, called SpiNNaker, is being built by Steven Furber of the University of Manchester. SpiNNaker is a digital computer—ie, the sort familiar in the everyday world, which processes information as a series of ones and zeros represented by the presence or absence of a voltage. It thus has at its core a network of bespoke microprocessors.

The other machine, Spikey, is being built by Dr Meier’s group. Spikey harks back to an earlier age of computing. Several of the first computers were analogue machines. These represent numbers as points on a continuously varying voltage range—so 0.5 volts would have a different meaning to 1 volt and 1.5 volts would have a different meaning again. In part, Spikey works like that. Analogue computers lost out to digital ones because the lack of ambiguity a digital system brings makes errors less likely. But Dr Meier thinks that because they operate in a way closer to some features of a real nervous system, analogue computers are a better way of modelling such features.

Dr Furber and his team have been working on SpiNNaker since 2006. To test the idea they built, two years ago, a version that had a mere 18 processors. They are now working on a bigger one. Much bigger. Their 1m-processor machine is due for completion in 2014. With that number of chips, Dr Furber reckons, he will be able to model about 1% of the human brain—and, crucially, he will be able to do so in real time. At the moment, even those supercomputers that can imitate much smaller fractions of what a brain gets up to have to do this imitation more slowly than the real thing can manage. Nor does Dr Furber plan to stop there. By 2020 he hopes to have developed a version of SpiNNaker that will have ten times the performance of the 1m-processor machine.

PART THREE FOLLOWS

Marco La Rosa said...

PART THREE:

SpiNNaker achieves its speed by chasing Dr Meier’s third desideratum—lack of a need to be programmed. Instead of shuttling relatively few large blocks of data around under the control of a central clock in the way that most modern computers work, its processors spit out lots of tiny spikes of information as and when it suits them. This is similar (deliberately so) to the way neurons work. Signals pass through neurons in the form of electrical spikes called action potentials that carry little information in themselves, other than that they have happened.

Such asynchronous signalling (so called because of the lack of a synchronising central clock) can process data more quickly than the synchronous sort, since no time is wasted waiting for the clock to tick. It also uses less energy, thus fulfilling Dr Meier’s first desideratum. And if a processor fails, the system will re-route around it, thus fulfilling his second. Precisely because it cannot easily be programmed, most computer engineers ignore asynchronous signalling. As a way of mimicking brains, however, it is perfect.
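As an illustrative aside, the toy simulation below captures the flavour of asynchronous, event-driven signalling: spikes sit in a priority queue and are handled strictly in arrival order, with no global clock tick to wait for. The network, delays and threshold are invented, and this is not how SpiNNaker is actually implemented.

import heapq

# Network: neuron -> list of (target_neuron, transmission_delay in ms)
connections = {0: [(1, 1.0), (2, 1.5)], 1: [(2, 0.5)], 2: []}
THRESHOLD = 2                        # incoming spikes needed before a neuron fires
received = {0: 0, 1: 0, 2: 0}

# Seed the network with a few external input spikes: (time, target_neuron)
events = [(0.0, 0), (0.1, 0), (0.2, 1), (0.3, 1)]
heapq.heapify(events)

while events:                        # spikes are processed as they arrive,
    t, n = heapq.heappop(events)     # never on a clock tick
    received[n] += 1
    if received[n] >= THRESHOLD:     # integrate, fire, reset
        received[n] = 0
        print(f"t={t:.1f} ms: neuron {n} fires")
        for target, delay in connections[n]:
            heapq.heappush(events, (t + delay, target))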

But not, perhaps, as perfect as an analogue approach. Dr Meier has not abandoned the digital route completely. But he has been discriminating in its use. He uses digital components to mimic messages transmitted across synapses—the junctions between neurons. Such messages, carried by chemicals called neurotransmitters, are all-or-nothing. In other words, they are digital.

The release of neurotransmitters is, in turn, a response to the arrival of an action potential. Neurons do not, however, fire further action potentials as soon as they receive one of these neurotransmitter signals. Rather, they build up to a threshold. When they have received a certain number of signals and the threshold is crossed—basically an analogue process—they then fire an action potential and reset themselves. Which is what Spikey’s ersatz neurons do, by building up charge in capacitors every time they are stimulated, until that threshold is reached and the capacitor discharges.
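A minimal numerical sketch of that integrate-and-fire behaviour is shown below: an input current charges a leaky "capacitor" voltage until it crosses a threshold, a spike is recorded, and the voltage resets. All parameter values are arbitrary, and Spikey's real analogue circuits are of course not a few lines of Python.

dt, tau = 0.1e-3, 20e-3        # time step and leak time constant, in seconds
v_thresh, v_reset = 1.0, 0.0   # firing threshold and reset value (arbitrary units)
i_in = 60.0                    # constant input drive (arbitrary units)

v, spike_times = 0.0, []
for step in range(int(0.2 / dt)):          # simulate 200 ms
    v += dt * (-v / tau + i_in)            # charge builds up and slowly leaks away
    if v >= v_thresh:                      # threshold crossed: emit a spike...
        spike_times.append(step * dt)
        v = v_reset                        # ...and discharge the capacitor

print(f"{len(spike_times)} spikes, first at {spike_times[0]*1e3:.1f} ms")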

PART FOUR FOLLOWS

Marco La Rosa said...

PART FOUR:

Does practice make perfect?

In Zurich, Giacomo Indiveri, a neuromorphic engineer at the Institute of Neuroinformatics (run jointly by the University of Zurich and ETH, an engineering university in the city) has also been going down the analogue path. Dr Indiveri is working independently of the HBP and with a different, more practical aim in mind. He is trying to build, using neuromorphic principles, what he calls “autonomous cognitive systems”—for example, cochlear implants that can tell whether the person they are fitted into is in a concert hall, in a car or at the beach, and adjust their output accordingly. His self-imposed constraints are that such things should have the same weight, volume and power consumption as their natural neurological equivalents, as well as behaving in as naturalistic a way as possible.

Part of this naturalistic approach is that the transistors in his systems often operate in what is known technically as the “sub-threshold domain”. This is a state in which a transistor is off (ie, is not supposed to be passing current, and thus represents a zero in the binary world), but is actually leaking a very tiny current (a few thousand-billionths of an amp) because electrons are diffusing through it.

Back in the 1980s Carver Mead, an engineer at the California Institute of Technology who is widely regarded as the father of neuromorphic computing (and certainly invented the word “neuromorphic” itself), demonstrated that sub-threshold domains behave in a similar way to the ion-channel proteins in cell membranes. Ion channels, which shuttle electrically charged sodium and potassium atoms into and out of cells, are responsible for, among other things, creating action potentials. Using sub-threshold domains is thus a good way of mimicking action potentials, and doing so with little consumption of power—again like a real biological system.
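For reference, the weak-inversion (sub-threshold) drain current of a MOSFET is usually approximated by an exponential law, which is what makes it a convenient analogue of the exponential voltage dependence of ion-channel conductances. This is the textbook approximation, not a statement about Dr Indiveri's specific circuits:

$$ I_D \;\approx\; I_0 \, e^{V_{GS}/(n V_T)}\left(1 - e^{-V_{DS}/V_T}\right), \qquad V_T = \frac{kT}{q} \approx 26\ \mathrm{mV} \ \text{at room temperature}, $$

where $I_0$ and the slope factor $n$ are device-dependent constants, which keeps the current down in the nanoampere range mentioned above.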

Dr Indiveri’s devices also run at the same speed as biological circuits (a few tens or hundreds of hertz, rather than the hyperactive gigahertz speeds of computer processors). That allows them to interact with real biological circuits, such as those of the ear in the case of a cochlear implant, and to process natural signals, such as human speech or gestures, efficiently.

Dr Indiveri is currently developing, using the sub-threshold-domain principle, neuromorphic chips that have hundreds of artificial neurons and thousands of synapses between those neurons. Though that might sound small beer compared with, say, Dr Furber’s putative million-processor system, it does not require an entire room to fit in, which is important if your goal is a workable prosthetic body part.

Unusually, for a field of information technology, neuromorphic computing is dominated by European researchers rather than American ones. But how long that will remain the case is open to question, for those on the other side of the Atlantic are trying hard to catch up. In particular, America’s equivalent of the neuromorphic part of the Human Brain Project, the Systems of Neuromorphic Adaptive Plastic Scalable Electronics, SyNAPSE, paid for by the Defence Advanced Research Projects Agency, is also sponsoring two neuromorphic computers.

PART FIVE FOLLOWS

Marco La Rosa said...

PART FIVE:

The Yanks are coming

One of these machines is being designed at HRL Laboratories in Malibu, California—a facility owned jointly by Boeing and General Motors. Narayan Srinivasa, the project’s leader, says his neuromorphic chip requires not a single line of programming code to function. Instead, it learns by doing, in the way that real brains do.

An important property of a real brain is that it is what is referred to as a small-world network. Each neuron within it has tens of thousands of synaptic connections with other neurons. This means that, even though a human brain contains about 86 billion neurons, each is within two or three connections of all the others via myriad potential routes.
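As a quick illustration of the small-world property, the snippet below builds a Watts-Strogatz graph with networkx and measures how short the typical path between nodes is despite high local clustering. It is a toy graph with invented parameters, not a model of cortical wiring or of any of the chips described here.

# Toy small-world graph (assumes: pip install networkx)
import networkx as nx

n, k, p = 1000, 20, 0.1     # nodes, neighbours per node, rewiring probability
g = nx.connected_watts_strogatz_graph(n, k, p)

print("average shortest path:", round(nx.average_shortest_path_length(g), 2))
print("average clustering:   ", round(nx.average_clustering(g), 2))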

In both natural brains and many attempts to make artificial ones (Dr Srinivasa’s included) memory-formation involves strengthening some of these synaptic connections and pruning others. And it is this that allows the network to process information without having to rely on a conventional computer program. One problem with building an artificial small-world network of this sort, though, is connecting all the neurons in a system that has a lot of them.

Many neuromorphic chips do this using what is called cross-bar architecture. A cross-bar is a dense grid of wires, each of which is connected to a neuron at the periphery of the grid. The synapses are at the junctions where wires cross. That works well for small circuits, but becomes progressively less wieldy as the number of neurons increases.

To get around this Dr Srinivasa employs “synaptic time multiplexing”, in which each physical synapse takes on the role of up to 10,000 virtual synapses, pretending to be each, in turn, for 100 billionths of a second. Such a system requires a central clock, to co-ordinate everything. And that clock runs fast. A brain typically operates at between 10Hz and 100Hz. Dr Srinivasa’s chip runs at a megahertz. But this allows every one of its 576 artificial neurons to talk to every other in the same amount of time that this would happen in a natural network of this size.
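The toy loop below conveys the multiplexing idea using the figures quoted above (10,000 virtual synapses, 100 billionths of a second each): one physical synapse circuit visits each virtual synapse in its own time slot, so a full sweep takes about a millisecond. It only illustrates the arithmetic; it is not HRL's design.

N_VIRTUAL = 10_000              # virtual synapses served by one physical synapse
SLOT = 100e-9                   # 100 billionths of a second per virtual synapse

weights = [0.5] * N_VIRTUAL     # state of each virtual synapse
pending = {42: 1.0, 7: -0.5}    # spikes waiting on particular virtual synapses

t = 0.0
for virtual_id in range(N_VIRTUAL):           # one full multiplexing sweep
    if virtual_id in pending:                 # only this slot's synapse is touched
        weights[virtual_id] += 0.01 * pending.pop(virtual_id)
    t += SLOT

print(f"one sweep over {N_VIRTUAL} virtual synapses takes {t*1e3:.2f} ms")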

And natural networks of this size do exist. C. elegans, a tiny nematode worm, is one of the best-studied animals on the planet because its developmental pathway is completely prescriptive. Bar the sex cells, every individual has either 959 cells (if a hermaphrodite) or 1,031 (if male; C. elegans has no pure females). In hermaphrodites 302 of the cells are neurons. In males the number is 381. And the animal has about 5,000 synapses.

Despite this simplicity, no neuromorphic computer has been able to ape the nervous system of C. elegans. To build a machine that could do so would be to advance from journeyman to master in the neuromorphic engineers’ guild. Dr Srinivasa hopes one of his chips will prove to be the necessary masterpiece.

In the meantime, and more practically, he and his team are working with AeroVironment, a firm that builds miniature drones that might, for example, fly around inside a building looking for trouble. One of the team’s chips could provide such drones with a brain that would, say, learn to recognise which rooms the drone had already visited, and maybe whether anything had changed in them. More advanced versions might even take the controls, and fly the drone by themselves.

PART SIX FOLLOWS

Marco La Rosa said...

PART SIX:

The other SyNAPSE project is run by Dharmendra Modha at IBM’s Almaden laboratory in San Jose. In collaboration with four American universities (Columbia, Cornell, the University of California, Merced and the University of Wisconsin-Madison), he and his team have built a prototype neuromorphic computer that has 256 “integrate-and-fire” neurons—so called because they add up (ie, integrate) their inputs until they reach a threshold, then spit out a signal and reset themselves. In this they are like the neurons in Spikey, though the electronic details are different because a digital memory is used instead of capacitors to record the incoming signals.

Dr Modha’s chip has 262,000 synapses, which, crucially, the neurons can rewire in response to the inputs they receive, just like a real brain. And, also like those in a real brain, the neurons remember their recent activities (which synapses they triggered) and use that knowledge to prune some connections and enhance others during the process of rewiring.
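A toy version of that strengthen-or-prune rule is sketched below: synapses that recently helped their neuron fire are reinforced, unused ones are weakened and eventually removed. The rule and numbers are invented for illustration and are not the actual update logic of Dr Modha's chip.

# Toy "strengthen or prune" update, run once per activity window.
synapses = {("A", "B"): 0.5, ("A", "C"): 0.5, ("B", "C"): 0.5}
recently_triggered = {("A", "B"), ("B", "C")}    # active in the last window

PRUNE_BELOW = 0.2
for key in list(synapses):
    if key in recently_triggered:
        synapses[key] = min(1.0, synapses[key] + 0.1)   # reinforce
    else:
        synapses[key] -= 0.2                            # weaken...
        if synapses[key] < PRUNE_BELOW:
            del synapses[key]                           # ...and prune if too weak

print(synapses)   # ("A", "C") survives this round at 0.3; another idle round removes it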

So far, Dr Modha and his team have taught their computer to play Pong, one of the first (and simplest) arcade video games, and also to recognise the numbers zero to nine. In the number-recognition program, when someone writes a number freehand on a touchscreen the neuromorphic chip extracts essential features of the scribble and uses them to guess (usually correctly) what that number is.

http://cdn.static-economist.com/sites/default/files/imagecache/290-width/images/print-edition/D/20130803_STD003_0.jpg

This may seem pretty basic, but it is intended merely as a proof of principle. The next bit of the plan is to scale it up.

One thing that is already known about the intermediate structure of the brain is that it is modular. The neocortex, where most neurons reside and which accounts for three-quarters of the brain’s volume, is made up of lots of columns, each of which contains about 70,000 neurons. Dr Modha plans something similar. He intends to use his chips as the equivalents of cortical columns, connecting them up to produce a computer that is, in this particular at least, truly brainlike. And he is getting there. Indeed, he has simulated a system that has a hundred trillion synapses—about the number in a real brain.

PART SEVEN FOLLOWS

Marco La Rosa said...

PART SEVEN:

After such knowledge

There remains, of course, the question of where neuromorphic computing might lead. At the moment, it is primitive. But if it succeeds, it may allow the construction of machines as intelligent as—or even more intelligent than—human beings. Science fiction may thus become science fact.

Moreover, matters may proceed faster than an outside observer, used to the idea that the brain is a black box impenetrable to science, might expect. Money is starting to be thrown at the question. The Human Brain Project has a €1 billion ($1.3 billion) budget over a decade. The BRAIN initiative’s first-year budget is $100m, and neuromorphic computing should do well out of both. And if scale is all that matters, because it really is just a question of linking up enough silicon equivalents of cortical columns and seeing how they prune and strengthen their own internal connections, then an answer could come soon.

Human beings like to think of their brains as more complex than those of lesser beings—and they are. But the main difference known for sure between a human brain and that of an ape or monkey is that it is bigger. It really might, therefore, simply be a question of linking enough appropriate components up and letting them work it out for themselves. And if that works, perhaps, as Marvin Minsky, a founder of the field of artificial intelligence, put it, they will keep humanity as pets.

Marco La Rosa said...

FROM DR. COTELLESSA:

The power of thought. The list closes with an example of the great progress achieved in the study of human-machine interfaces: thanks to sensors implanted in the cerebral cortex, for the first time a paralysed woman managed to move a robotic limb by controlling it with her thoughts. It happened in December 2012 in Pittsburgh, Pennsylvania; according to experts, many projects of this kind could see the light in the next five or six years.

Marco La Rosa said...

FROM DR. COTELLESSA:

Self-Destructing Microbial Robots Turn Wastewater Into Gold Mine



Since we’ve been on something of a wastewater tear this week, let’s keep the ball rolling with a look at another form of new technology for harvesting renewable energy from the stuff. This one is from a company called Pilus Energy. The company has tweaked bacteria to come up with proprietary energy-harvesting organisms it calls BactoBots™, leading to a new generation of high efficiency microbial fuel cells.

A Next-Generation Microbial Fuel Cell

While other wastewater-to-energy systems involving organisms have exploited the digestive or fermentation pathways, Pilus Energy has focused on the metabolic pathway.

Bacteria create renewable energy from wastewater.

So far, Pilus has released two products, RemdiBot and GalvaniBot. We’re especially interested in GalvaniBot, which forms the heart of what the company calls a next-generation microbial fuel cell. Here’s the connection, as explained by Pilus:

Most bacteria can gain energy by transferring electrons from a low-potential substrate, such as glucose, to a high-potential electron acceptor, such as, for example, molecular oxygen, a process commonly referred to as respiration. In humans, the mitochondria represent the metabolic “furnaces” that perform the same function…Essentially, our organism possesses nearly identical energetic properties of the human mitochondrion. In fact, many scientists believe that human mitochondria have evolved from bacteria

In addition to generating electricity, GalvaniBot reduces hundreds of organic pollutants in wastewater into high value products, namely renewable hydrogen and methane.

That helps to resolve a problem we noted earlier, which is that the energy density of wastewater is quite low compared to other renewable feedstocks. By extracting more high-value products from wastewater, the BactoBot system has the potential to be cost-effective.

The Go-Anywhere Self Destructing Robot

Pilus has come up with a couple of other interesting tweaks to the microbial fuel cell concept. It has protected its proprietary bacteria with a “key” in the form of a non-toxic additive. Without the additive, the BactoBots quickly die off or self-destruct. In addition to forestalling theft of the genetic code, the key helps to prevent the engineered bacteria from drifting into other environments.

Another aspect of the system is its scalability and portability. In addition to use at large, centralized municipal wastewater facilities, the system could prove cost-effective at residential, commercial, and industrial sites of various sizes, as well as schools and other institutions, and public facilities.


Marco La Rosa said...

FROM DR. COTELLESSA:

Tesla Working Towards 90 Percent Autonomous Car Within Three Years

http://spectrum.ieee.org/img/2013-Tesla-Model-S-front-1-1379494304965.jpg

Tesla Motors has somehow managed to make a car that's miles ahead of anything else by virtue of its innovative technology. Instead of being content with that, Elon Musk has decided that the next step is to go autonomous. Or at least, mostly autonomous.

As with most of Elon Musk's awesomely crazy ideas, most of what we have at this point is a big idea.

“We should be able to do 90 percent of miles driven within three years,” [Musk] said. Mr Musk would not reveal further details of Tesla’s autonomy project, but said it was “internal development” rather than technology being supplied by another company. “It’s not speculation,” he said.

Musk went on to say that he doesn't believe that fully autonomous cars are quite feasible yet: "It’s incredibly hard to get the last few percent." In other words, getting to 90 percent autonomy takes some level of effort, and getting to 95 percent autonomy might take the same amount of effort as getting to 90 percent. Ditto for 97 percent from 95 percent, and as for complete autonomy, well...

"One person familiar with Google’s efforts said carmakers had been hesitant about adopting the Google technology because of the potential liabilities from accidents involving robot cars. Google would not comment."

Whether or not Google comments, liability is a huge issue with autonomous cars, and nobody wants to be the first company to put one in the hands of a consumer only to shortly thereafter be the first company to be sued if the car has an accident in autonomous mode. The fact is that robot cars could be much better drivers than most humans are, but even if we all accept that, the robot cars (and their makers) will inevitably be blamed whenever something goes wrong. Even if autonomous cars halve the total number of traffic accidents, the headlines (not our headlines) will just as inevitably be about robots getting in lots of accidents.

What we could really use is some company to say something like, "damn the torpedos, full speed ahead" and just go for it. And that's why we're excited about this announcement from Tesla, even if "we want to do this" is a rather long way from "we've done this, and here it is."

Marco La Rosa said...

FROM DR. COTELLESSA:

A Collaborative First: Robots Plugging Away in VW Engines

Industrial motion control reached a significant milestone a few weeks ago. For the first time, Volkswagen put a lightweight robot to work in an engine manufacturing plant. Not just in the same work station, but working with and assisting humans to insert glow plugs into cylinder head drill holes — all without safety barriers or enclosures to protect the humans. The safety is provided by the robot itself. Besides meeting regulatory safety codes, Universal Robots' six-axis robotic arm incorporates a gripper that senses and limits impact forces and pressures that could cause injuries.

Marco La Rosa said...

FROM DR. COTELLESSA:

Robots or "Co-Bots" are coexisting with humans in manufacturing environments.

The 1980s film “The Terminator” and its sequels are examples of great storytelling. The movie franchise brought into the 20th Century a couple of literary ideas from the Industrial Revolution: fear of world conquest by robotic machines.

Robots or Co-bots

The reality of robotics in the 21st Century is quite different, though. The world manufacturing community is focusing on ways to assimilate robots into the humanized workplace, not replace humans entirely. In contrast to our traditional perception of boxlike, metallic robots from days past, humanoid robots look and act a lot more human. Maybe that’s why they promise to make good coworkers for humans. Tasks can be divided between robots and humans depending on what each is better at doing.
http://api.ning.com/files/2xmvS7IAtKq9UC63dDz*GkfHL8LzptqmdcAHYuaZRgFLm9AXqx8awi*tP8KC6XHiFJNaNRYccQIffmcmR*NA2pFBhysyZFvS/ABBdualarmrobot.jpg

Lest human workers feel too threatened by these new “Co-Bot” coworkers, Leila Takayama points out in the May-June 2013 issue of Technology Review that there is really no need for humanoid robots to behave just like humans. Takayama, a research scientist and manager at Willow Garage—a developer of hardware and open source software for personal robotic applications—says we should think of humanoid robots like service dogs, who handle predictable tasks and do not need to understand any words.

An example of the new generation of robotics is Kinova Robotics’ new JACO Research Edition robotic arm, which is a safe robot not needing fenced guarded cells to keep humans safe from injury. The arm is mounted on a standard aluminum extruded support structure that can be affixed to almost any surface. The arm’s gripper consists of three underactuated fingers that can be individually controlled and are designed for maximum flexibility and grip. The fingers adjust to any object, regardless of its shape. The operator can control the arm with a computer or Kinova’s three-axis, seven-button joystick. The operator can use three different modes: translate, rotate and grip.

One of the most publicized humanoid robots is “Baxter,” which was recently showcased at the Association for Advancing Automation convention in Chicago. Baxter, manufactured by Rethink Robotics in Boston, is designed to work side by side with humans; it simply takes manual instructions and starts working on the task in about five minutes.

PART TWO FOLLOWS

Marco La Rosa said...

FROM DR. COTELLESSA:

PART TWO

Frank Tobe, editor and publisher of the Everything-Robotic blog on The Robot Report website, reports that Rodney Brooks, founder of Baxter manufacturer Rethink Robotics, is focused on developing a robot that works with—not in the place of—human workers. In so doing, he wants to help manufacturers increase efficiency, reduce costs and reduce the need to “offshore” their operations.

Tobe reports that another humanoid robot manufacturer, Denmark-based Universal Robots, has developed its UR5 and UR10 robots, which can handle high-speed pick-and-place tasks and handle various products differently. For example, a UR5 has been used to pick bottles of cream off of one production line and place them onto the packaging line at a Johnson & Johnson facility in Greece. The UR5 handles various types of creams that come down the line and each is positioned differently.

Tobe indicates that soon, co-bots like Baxter and UR5 and UR10 will be part of mainstream automated manufacturing operations. Rethink Robotics expects to produce enough Baxters for sales of 500 in 2013. UR is building at least 100 robots a month, including 25–30 percent for U.S. customers.

Additionally, the May 30, 2012 edition of The Engineer reported that a Spanish technology company, Tecnalia Research & Innovation, is introducing a Japanese robot into European industry that is capable of working alongside people. Hiro, a humanoid robot developed by Kawada Industries, is designed to work alongside humans and is equipped with two human-like hands that can perform tasks deemed too uncomfortable or hazardous for workers.

According to the publication, it is estimated that 60 percent of manufacturers doing final product assembly work will have this type of robot on their production lines within six years.

American Grippers Inc. can help with your end effectors by supplying a complete range of sophisticated automation assembly products, including pneumatic grippers, rotary actuators, thrusters, linear actuator slides, mini slides, robotic tool changers, overload devices and robotic grippers. All products are available in imperial and metric versions for flexibility of design for a world market.

Our company headquarters is located in Trumbull, CT, and all of our parts are made in the USA. Still, our products are available in imperial and metric versions for flexibility of design in our globalized economy. Our products are used in a variety of manufacturing processes, including assembly, packaging, loading and unloading, and part transfer.

Marco La Rosa said...

FROM DR. GIUSEPPE COTELLESSA

A new model of quadcopter drone has been built. It is a brand-new experimental robot able to emulate movements and gestures that are surprisingly human-like. The drone can in fact respond to and memorise stimuli from the external environment, adjusting its movements accordingly. The quadcopter can perceive weight, movement and gravity. Through some entertaining experiments you can see how, very soon, it will be possible to command and control drones and robots with the body as well, and they will be able to help us with everyday activities. Take a few minutes to watch the whole video, because it really is worth it.

It is not the first quadcopter drone in existence. One of the best known, for example, is the Parrot AR.Drone, which can be controlled entirely from a mobile device. For now such drones are used only for experiments and entertainment, but we will soon see them at work on more complex tasks.

Web address of the video, and the video itself:

http://thewebmate.com/2013/09/23/robot-drone-quadcopter/#!

Marco La Rosa said...

FROM DR. GIUSEPPE COTELLESSA

UCSD Engineers 3D-Print Robot For Power Line Inspection



Think about the miles of power lines criss-crossing our modern world. Inspecting them all for signs of wear and tear can be a complicated and expensive process. That's why a group of crafty UC San Diego engineers built SkySweeper, an elegant little 3D-printed robot that could make power line inspection a lot simpler.
http://www.wireandtubenews.com/wp-content/uploads/2013/08/Cable-Tester.jpg

SkySweeper looks sort of like a pair of scissors slicing its way down a tightrope. It consists of two adjustable clamps fixed to arms that join at a single motor.

UCSD mechanical engineering grad student Nick Morozovsky wanted to streamline power line inspection, so he stripped down his design for the robot. He built the parts with a 3D printer, the same technology currently being used to recreate famous art and assemble organ tissue from stem cells.

Morozovsky said that because he relied on 3D printing and widely available electronics, each SkySweeper unit could cost less than $1,000 when scaled for commercial use. Comparing his invention to the bulky, costly equipment currently used to inspect power lines, Morozovsky said, “This is much simpler.”

Morozovsky and his colleagues at UCSD’s Flow Control & Coordinated Robotics Labs will present their invention later this year at a robotics conference in Tokyo.

They'll also be one of the hopeful teams entering the Road to Maker Faire Challenge. The winners net $2,500 and the chance to compete in the World Maker Faire in New York.

Marco La Rosa said...

FROM DR. COTELLESSA

These Magnetic Nanobots Could Carry Drugs Into Your Brain

The robots are coming from INSIDE the blood!

http://www.popsci.com/sites/popsci.com/files/styles/article_image_large/public/images/2013/09/MicroBot.jpg?itok=3iJy9Dyi

Magnetic Microrobots

It's like a magic elevator for medicine.

These tiny cages, each 100 microns long and 40 microns wide, may not look like much, but they are the new semi-trucks of targeted medicine delivery. Developed by a team of Chinese researchers, in conjunction with Swiss and South Korean institutes, the nickel-coated microbots are steered wirelessly by electromagnetic fields. Thanks to that external control, these microbots can carry precious cargo to exactly where the body needs it, especially to sensitive places like brains or eyes.

Tiny robots swimming through blood for medical purposes are a relatively new phenomenon. In 2011, researchers published a paper on minuscule motors that could propel such machines. Other microbots can carry medicine, but their spiral shape and smaller bodies limit how much they can carry. Magnetically steered robots inside living animals have also been tested before.

What makes these microbots unique? Size! Zhang Li, a researcher on the project, explains that "a microbot is like a vehicle that ships drugs directly to the affected area. And I want to design a truck, not a car." Larger robots mean more medicine delivered. Human trials of these robots are likely decades away, but the robots have been tested in rabbits and mice.


Marco La Rosa said...

FROM DR. COTELLESSA

Teaching Robots to Run Safely

Robot_1

Safety standards are being revised and harmonized to keep up with the many advances in robot technology

As robots become more common in a growing number of industries, ensuring that they operate safely is becoming a greater concern. Standards bodies have responded by updating two leading safety standards and harmonizing them so global compliance is more straightforward.

The Robotic Industries Association estimates that some 230,000 robots are now in use in United States factories. That number is increasing at a record pace, with a total of 10,854 robots valued at $679.3 million sold by North American robotics companies in the first six months of 2013, according to RIA.

Two revised standards form the basis for certifying the safety of robots in production facilities. ANSI/RIA R15.06-2012, developed by the American National Standards Institute and the Robotic Industries Association, has been updated for the first time since 1999.

This standard has also been harmonized with the international ISO 10218:2011 standard for robot manufacturers and integrators, which was also recently updated. The ANSI/RIA 15.06 document addresses a range of safety concerns, beginning with definitions that are unambiguous.

“Control responsibility is a concept that’s difficult to quantify,” says Roberta Nelson Shea, chair of the ANSI RIA R15.06-2012 committee. “It’s now clearly defined, with four levels of reliability.”

Clarifications are not the only reason that the standard will be easier to implement. ANSI/RIA 15.06 and ISO 10218 have commonalities, not differences. That will help drive down compliance costs and make it easier for global companies to produce equipment that can be sold anywhere.

Shea notes that ISO 10218 has two sections: Part 1 for equipment manufacturers and Part 2 for integrators and installers. However, she cautions that it’s the end user who’s responsible for any accidents that occur.

“The user is ultimately responsible for the safety of industrial robot systems, including integration and installation,” she said in a Siemens-sponsored webinar.

That’s because any changes done by the operating company must meet safety requirements. For example, an existing cell that’s moved can remain compliant with the 1999 version of ANSI/RIA R15.06. But if any aspect of the layout or controls are changed, it’s considered a new system that must comply with ANSI/RIA R15.06-2012.

PART TWO FOLLOWS

Marco La Rosa said...

FROM DR. COTELLESSA

PART TWO

Over the past few years, risk assessment has been one of the big trends in safety. Not surprisingly, it’s a central factor for both documents. They leverage another standard, EN ISO 13849, which provides guidance for determining the level of risk for different subsystems on the robot. Product developers, integrators, installers and users must run through a list of risks and come up with solutions.

“A lot of the decisions on which way you should go with safety functionality are based on risk assessment,” says Scott Krumwiede, Business Development and Safety Manager for RWD Technologies, a Division of General Physics. “With risk assessment, you can set the robot’s performance level to the hazards and the safety circuitry.”

Both ANSI RIA R15.06-2012 and ISO 10218 also address installations that have multiple robots. Robot cells have safety issues that go beyond the norm of injuring people. Operators must ensure that robots don’t collide with each other. The documents provide techniques for protecting people and equipment during programming and normal operations.

The standards set risk and safety levels, such as performance level (PL) and safety integrity level (SIL) concepts found in ISO 13849 and IEC 62061, respectively. Companies need to pick one of these levels and use it throughout their certification and validation process, Krumwiede noted.
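As a rough illustration of how such a risk assessment maps onto a required performance level, the risk graph of EN ISO 13849-1 is commonly summarised as below, coded here for readability. This is a simplified summary for orientation only; consult the standard itself before using it in a real assessment.

# Commonly cited EN ISO 13849-1 risk graph: severity (S), frequency/exposure (F)
# and possibility of avoidance (P) map to a required performance level PLr.
RISK_GRAPH = {
    ("S1", "F1", "P1"): "a", ("S1", "F1", "P2"): "b",
    ("S1", "F2", "P1"): "b", ("S1", "F2", "P2"): "c",
    ("S2", "F1", "P1"): "c", ("S2", "F1", "P2"): "d",
    ("S2", "F2", "P1"): "d", ("S2", "F2", "P2"): "e",
}

def required_pl(severity, frequency, avoidance):
    return RISK_GRAPH[(severity, frequency, avoidance)]

# Serious injury, frequent exposure, scarcely avoidable -> PLr = "e"
print(required_pl("S2", "F2", "P2"))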

When robots are operating, stopping them when problems arise is one of the important issues for safety. ISO 10218 addresses a number of emergency stop conditions and protective stop controls.

Emergency stop controls must be placed on every control station in an area where operators have unobstructed access. Protective stops can be handled by either manual or automatic means. Another important parameter is to make sure that the robot stops in a safe position.

“Energy isolation is required, you have to be able to shut off everything — electrical, hydraulic or pneumatic,” Krumwiede says. “You also want to stop in a safe mode, you don’t want a robot that’s carrying a heavy load to drop its payload when the power is stopped.”

The ISO standard also sets parameters for programming robots. Users can be in the work space during programming, when movement is restricted to 250 mm per second. When the operator is standing next to the device during programming, the robot may run only if the operator continually takes action, such as holding down a control button.
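A minimal sketch of the two teach-mode protections just described (reduced speed and hold-to-run enabling) might look like the following. It illustrates the logic only; it is not any vendor's actual controller code.

REDUCED_SPEED_LIMIT = 250.0      # mm/s, maximum speed while programming

def teach_mode_velocity(requested_mm_s, enabling_button_held):
    """Command zero unless the operator keeps the enabling button pressed,
    and never exceed the reduced speed limit."""
    if not enabling_button_held:
        return 0.0
    return min(abs(requested_mm_s), REDUCED_SPEED_LIMIT)

print(teach_mode_velocity(400.0, enabling_button_held=True))    # -> 250.0
print(teach_mode_velocity(400.0, enabling_button_held=False))   # -> 0.0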

Shea notes that all ANSI and ISO standards are voluntary. But European machine directives specify the standards, so they’re effectively required by law for any product sold there. In the U.S., OSHA personnel are trained using ANSI and ISO standards, making the standards the basis of U.S. safety regulations.

Marco La Rosa said...

FROM DR. COTELLESSA

The human mind. Easy to imagine it as the ultimate in motion controllers. Yet, until recently, actuating motion in an external device by one's thought was only the stuff of science fiction. Now, University of Minnesota researchers have demonstrated people controlling a flying robot with their thoughts. No chip implants, no surgery required. An EEG skull cap fitted with 64 electrodes senses the wearer's thoughts about movement, receiving electric currents from neurons in the brain's motor cortex. Researchers envision their work leading to technologies that help paralyzed patients control wheelchairs, artificial limbs, or other devices.


Marco La Rosa said...

FROM DR. COTELLESSA

Researchers at Catholic University in Peru are outfitting military-style drones with equipment to make them viable data gatherers in the civilian market. Equipped with microcomputers, GPS trackers, compasses, cameras, and altimeters, the flying laboratories are doing reconnaissance missions over farms and fields, reporting back with agricultural data, as well as taking photos and video of archaeological ruins

Marco La Rosa said...

FROM DR. COTELLESSA

What is robotics



An accurate definition of Robotics is fairly difficult, since it is a very multifaceted field of study which encompasses a wide range of diverse disciplines and subjects. In order to realise a robot we need mechanics, electronics, computer science, artificial intelligence, neuroscience, psychology, logic, mathematics, biology, physiology, linguistics and, in a more indirect way, philosophy, ethics, art and design.

A definition of Robotics is that of Brady: “Robotics is the intelligent connection between perception and action”. In other words a robot is something that reacts in an intelligent way to a sensorial sketch of the environment, naturally with a given purpose. It is a very “fuzzy” definition which almost includes the electric toaster, but it conveys the great generality of the idea of robot.

“I don’t know what a robot is, but I can tell when I see one” Joseph E. Engelberger, President of Unimation Inc.

It is interesting to remark here that there is a very large overlap between robotics and much consumer and professional electronic equipment. Many products possess a high degree of intelligence (microprocessors, memories) and implement genuine embedded intelligent systems. This means that “artificial” intelligence has already become quite pervasive and that our future will be characterised by an ever-increasing integration of different intelligent appliances into a fully intelligent environment, where robots will probably represent the “natural” interface between the intelligent environment and human beings.

Marco La Rosa said...

FROM DR. COTELLESSA

Robot Snake Inspects Plant's Pipeline

Researchers from Carnegie Mellon University's Robotics Institute developed a vision-enabled snake robot that slithered through pipes and valves at a nuclear power plant in Austria. The snake, equipped with a video camera and LED lighting, reached areas that could not easily be reached, or would not be safe for humans in the event of radioactive contamination. Researchers were able to obtain clear and well-lit images from pipes and valves from 60 ft out.

Marco La Rosa said...

FROM DR. COTELLESSA

Tofu is both soft and slippery, which makes it difficult to handle. A robot with a gentle touch picks and places up to 25 blocks per minute of soft tofu, after using a vision system to determine the location and orientation of the blocks. The robot can handle firmer blocks of fried tofu nearly twice as fast.

Marco La Rosa said...

FROM DR. COTELLESSA

Robots Attack Jellyfish



Apparently, jellyfish and nuclear power plants don't mix. Sweden's Oskarshamn Unit 3 had to be shut down recently when a large number of jellyfish got into the cooling water intake. Power Magazine says it's the latest in a series of recent disturbances at the facility. Robots are being employed to shred the invaders.

Nuclear Plant Shut Down Due to Jellyfish

At noon on Sunday, Sept. 29, 2013, Oskarshamn Unit 3 (O3) was manually shut down due to a large amount of jellyfish present at the cooling water intake.

Operations management chose to disconnect the facility from the grid as a preventive safety measure rather than risk an automatic shutdown due to insufficient cooling in the condenser. The function of the cooling water in the condenser is to condense the steam exhausted from the turbine generator so that it can be pumped back into the reactor vessel. The cooling water in the condenser has no direct contact with the cooling water in the reactor vessel, but it does act as the heat sink for the reactor coolant.

O3 is a 1,400-MW nuclear plant located 30 kilometers north of the town of Oskarshamn on Sweden’s east coast. The Oskarshamn facility includes three units, which are owned and operated by OKG. Together the plants account for 10% of the total electricity generation in Sweden. The 473-MW O1 was commissioned in 1972 and was Sweden’s first commercial nuclear power unit. The 638-MW O2 began operation in 1974. O3 was put into commercial operation in 1985 and is the world’s largest boiling water reactor.

O3 has suffered recurrent operational disturbances over the last several weeks due to a number of separate independent failures in the facility. On Sept. 1, a control valve in the conventional turbine system caused a shutdown of the plant. This was followed by problems with another control valve in the turbine plant and a failure in the protective equipment for the facility’s transformers. In connection with a disturbance in the internal power supply system that occurred on Sept. 10, a leakage of cooling water was detected in the generator at O3. The leakage was repaired and the facility began supplying electricity to the grid again on Wednesday, Sept. 25. Now the problem with jellyfish has forced the plant offline again.

The company reported that the entire O3 organization has done a very good job dealing with the issues. The fact is that O3 had been operating very reliably throughout 2013 until the recent struggles. It was within about one month of generating more electricity than ever before during a single calendar year. The trouble arose at a most inopportune time as a new winter season approaches, bringing increased energy consumption. Nuclear power is an important component of Sweden’s electricity system and helps maintain stability throughout the grid.

Of course, problems with condenser fouling are not isolated to the plants in Sweden. POWER has reported previously on problems at plants in Scotland and Israel, and issues with jellyfish have been witnessed at plants in Japan and many other countries. Many plants throughout the United States face fouling issues caused by fish and other aquatic life too.

Marco La Rosa said...

FROM DR. COTELLESSA

Artificial heart to pump human waste into future robots

A new device capable of pumping human waste into the "engine room" of a self-sustaining robot has been created by a group of researchers from Bristol.

Modelled on the human heart, the artificial device incorporates smart materials called shape memory alloys and could be used to deliver human urine to future generations of EcoBot – a robot that can function completely on its own by collecting waste and converting it into electricity.

The device has been tested and the results have been presented today, 8 November, in IOP Publishing's journal Bioinspiration and Biomimetics.

Researchers based at the Bristol Robotics Laboratory – a joint venture between the University of the West of England and University of Bristol – have created four generations of EcoBots in the past 10 years, each of which is powered by electricity-generating microbial fuel cells that employ live microorganisms to digest waste organic matter and generate low-level power.

In the future, it is believed that EcoBots could be deployed as monitors in areas where there may be dangerous levels of pollution, or indeed dangerous predators, so that little human maintenance is needed. It has already been shown that these types of robots can generate their energy from rotten fruit and vegetables, dead flies, waste water, sludge and human urine.

A video of microbial fuel cells, fed on urine, charging a mobile phone can be viewed here - http://www.youtube.com/watch?v=4LTprRQTKAw

Lead author of the study Peter Walters, from the Centre for Fine Print Research, University of the West of England, said: "We speculate that in the future, urine-powered EcoBots could perform environmental monitoring tasks such as measuring temperature, humidity and air quality. A number of EcoBots could also function as a mobile, distributed sensor network.

"In the city environment, they could re-charge using urine from urinals in public lavatories. In rural environments, liquid waste effluent could be collected from farms."

At the moment conventional motor pumps are used to deliver liquid feedstock to the EcoBot's fuel cells; however, they are prone to mechanical failure and blockages.

The new device, which has an internal volume of 24.5 ml, works in a similar fashion to the human heart by compressing the body of the pump and forcing the liquid out. This was achieved using "artificial muscles" made from shape memory alloys – a group of smart materials that are able to 'remember' their original shape.

When heated with an electric current, the artificial muscles compressed a soft region in the centre of the heart-pump causing the fluid to be ejected through an outlet and pumped to a height that would be sufficient to deliver fluid to an EcoBot's fuel cells. The artificial muscles then cooled and returned to their original shape when the electric current was removed, causing the heart-pump to relax and prompting fluid from a reservoir to be drawn in for the next cycle.

A stack of 24 microbial fuel cells fed on urine was able to generate enough electricity to charge a capacitor. The energy stored in the capacitor was then used to start another cycle of pumping from the artificial heart.
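For orientation, the energy the capacitor can hand over per pumping cycle is simply

$$ E = \tfrac{1}{2} C V^{2}, $$

so, for example, a hypothetical 1 F supercapacitor charged to 2.5 V would store roughly 3 J per cycle. These numbers are illustrative only; the study's actual capacitor values are not given here.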

"The artificial heartbeat is mechanically simpler than a conventional electric motor-driven pump by virtue of the fact that it employs artificial muscle fibres to create the pumping action, rather than an electric motor, which is by comparison a more complex mechanical assembly," continued Walters.

The group's future research will focus on improving the efficiency of the device, and investigating how it might be incorporated into the next generation of MFC-powered robots.

Marco La Rosa said...

FROM DR. COTELLESSA

Google robots: here are the humanoids that will take on Amazon's drones and work in factories

In itself, the idea even makes sense. After leaving the helm of the Android division, its father, Andy Rubin, has started working on robots. Still for Google, but with his gaze set much further into the future. Andy is designing the new robots that Google should one day produce, and to do it Larry Page seems to have put a sizeable sum at his disposal, judging at least by the seven companies acquired in the last six months which, according to the New York Times, are attributable to this project.

Unfortunately, the robots do not seem destined to reach the consumer market any time soon. For that, Google is currently focused on Glass, and it seems we will have to settle for that. At most we will get the self-driving car. The robots will instead find practical applications in industry, and they could play a key role in overturning today's scenario in consumer electronics, where our devices are for the most part assembled by hand in the factories of Foxconn or other Chinese companies. Bringing production back home is a goal not only of Google, but also of Apple and other big American companies.

We might, however, deal with these robots personally. One interesting scenario for this technology would be the delivery of products bought through Google's shopping service, for now active only in San Francisco. It would be a clash of titans, with Amazon already planning deliveries by drone. After all, Google already has the technology to make robots walk or drive through the streets of our cities.

Designing this robot is a great challenge for Andy. It means building something he has described as a kind of windscreen wiper with enough intelligence to switch itself on when it rains. But if there is one person who can pull it off, it really is Andy.