These Bosstown Dynamics robot gunslingers are going to overthrow humanity—except they’re not real
A video has made its way across the web, spreading like wildfire. It features a gunslinging “robot,” created by “Bosstown Dynamics,” performing some incredible shooting drills, all while being knocked around by a couple of assholes with hockey sticks.
What’s most impressive, though, is that the bot can seemingly distinguish between living and non-living figures: during the drills it only shoots the targets, never the humans beating it up.
There’s a catch, though. It’s fake.
The video was posted by Corridor Digital and features a computer-generated robot from “Bosstown Dynamics,” a spoof of Boston Dynamics, an actual engineering and robotics company that spun out of the Massachusetts Institute of Technology.
The most recent real Boston Dynamics video, which in some ways is actually more frightening than the spoof, shows a humanoid robot doing parkour tricks, flips, and handstands.
The video caught major traction when it was shared by comedian and podcast host Joe Rogan, who clearly had fallen for the CGI bot video.
Of course, because the internet is the way that it is, the replies to Rogan’s tweet are flooded with users telling Joe that the video is fake, pointing out CGI errors, and mentioning that the crew at Corridor Digital even made a reveal video showing how they produced the fake.
While these robots may not be real yet, it will be interesting to see how companies like Boston Dynamics gradually ruin the Earth by developing robot armies that will overthrow humanity.
These Black Mirror-like robots might not be too far around the corner.
Scientists at the University of Vermont have created what they claim to be “living robots.” The first of their kind, these robots are built out of living cells, making them an entirely new life form, according to a recent article in The Independent.
Never before has humanity managed to create “completely biological machines from the ground up”, wrote the research team in a recent paper.
The cells have been derived from frog embryos and turned into a machine that can be programmed to work any way the research team wants.
Such a discovery could allow the tiny “xenobots” to be dispatched throughout a patient’s body to transport medicine or even do environmental work such as retrieving pollution from the ocean. The scientists claim the xenobots even have the ability to regenerate themselves when damaged.
The new hybrids were designed on a supercomputer and then built by biologists. “These are novel living machines,” says Joshua Bongard, the University of Vermont expert who co-led the new research. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”
The xenobots were built at Tufts University. “We can imagine many useful applications of these living robots that other machines can’t do like searching out nasty compounds or radioactive contamination, gathering micro-plastic in the oceans, travelling in arteries to scrape out plaque,” said co-leader Michael Levin who directs the Centre for Regenerative and Developmental Biology at Tufts University.
Researchers used a supercomputer to create thousands of possible designs for the new life-forms. The scientists ran a virtual version of evolution: they would assign the computer a task, then have it calculate which designs might perform that task best.
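The virtual-evolution loop described above can be sketched in miniature. This is a toy illustration only, not the team’s actual pipeline: the real work scored candidate body plans in detailed physics simulations, while here the “fitness” function is an invented stand-in and a design is just a list of binary cell choices.

```python
import random

def fitness(design):
    # Toy stand-in for the task simulator: score a candidate design.
    # (The real system simulated how arrangements of passive and
    # contractile frog cells moved or pushed objects.)
    return sum(design)

def evolve(pop_size=20, length=8, generations=50):
    # Start from a random population of candidate designs.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better-performing half of the population...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ...and refill it with slightly mutated copies of the survivors.
        children = []
        for parent in survivors:
            child = parent[:]
            i = random.randrange(length)
            child[i] = 1 - child[i]  # flip one "cell" at random
            children.append(child)
        population = survivors + children
    # Hand the best surviving design to the (virtual) microsurgeons.
    return max(population, key=fitness)

best = evolve()
```

Because the top designs are always carried over unchanged, the best score never decreases from one generation to the next, which is what lets the search steadily home in on workable body plans.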
The second part of their research involved microsurgeons bringing the designs to life. They took stem cells from the embryos of African clawed frogs, incubated them, and then used specialized tools to cut them apart and reassemble them into the designs created by the computer.
Assembling real organic material into a life-form that had previously existed nowhere in nature is a definite first for the field.
The xenobots already have the ability to push pellets around and organize themselves collectively and spontaneously.
Scientists think this is just the beginning and that they will be able to create even more complex versions of the xenobots. The computer simulations so far suggest that future xenobots could be given a pouch on their body for carrying an object, allowing one to, for example, swim through a patient’s body and deliver a drug.
The xenobots can regenerate themselves when damaged: one can be sliced almost in two and knit itself back together. And unlike the traditional materials robots have been built from in the past, xenobots are entirely biodegradable once their work is finished.
There is a danger in all of this, however, researchers admit. Complex systems can evolve in ways we do not anticipate, and the more complex the systems become, the harder the xenobots’ behaviour will be to predict.
“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” said Levin in a statement. “This study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences,” he said.
Tens of thousands of tweets have flooded in, as it appears Facebook and Instagram, both owned by Facebook, are experiencing vast technical difficulties, loading slowly for many users across the world.
Instagram published a response via Twitter that the tech giant is “aware that some people are currently having trouble accessing Facebook’s family of apps, including Instagram,” and promising to “get things back to normal as quickly as possible.”
WhatsApp is also reportedly experiencing issues.
Tech ethicists have been sounding the alarm about deepfakes for some time now, and tech think tank Future Advocacy has decided to show just how feasible and damaging the technology can be. It has released a fake campaign video that shows the two main candidates in the coming U.K. election endorsing each other.
Rationally, we know that Jeremy Corbyn and Boris Johnson would not actually endorse each other for the office they both covet, yet our eyes deceive us when we view a video like this. In the hands of Future Advocacy, the video is revealed to be a fake. But this tech could be used by bad actors to disrupt elections all over the world.
Unlike the magician who guards his sleight of hand with care, Future Advocacy reveals how the trick was done. First, they chose the source video: the clip that would supply the base image and movement of the person being faked. Then they parsed the words the person uses most, and wrote a script that sounds like something the person would say. After that, the voice was laid in and aligned with the mouth movements.
Last month, the U.S. Senate passed the Deepfake Report Act, which “would require the Department of Homeland Security to publish an annual report on the use of deepfake technology that would be required to include an assessment of how both foreign governments and domestic groups are using deepfakes to harm national security.”
The Senate became more concerned about the problem earlier this year when a parody video of Nancy Pelosi was released that made her look drunk. This video was not actually a deepfake, but an actual video slowed down to make her appear sluggish. Still, it was enough to strike fear into the hearts of legislators.
While the Deepfake Report Act is a step toward understanding how the tech is used, what is still needed are tools to detect it. Facebook, ever in the spotlight when it comes to big-tech criticism, has dedicated $10 million to the study of deepfakes.
The Pentagon’s Defense Advanced Research Projects Agency (DARPA) has been researching deepfakes, learning first how to make them so that it can learn how to detect them. The creation of deepfakes depends entirely on computer analysis, and so does their detection.
It’s a good bet that while Future Advocacy and the Pentagon are working on raising awareness and figuring out how to combat this problem, respectively, those who would sow the seeds of chaos around the world are working just as hard to make their fakes undetectable.
The very concept of reality is under threat. Libel and defamation laws could punish those who maliciously make faked campaign videos such as the one conjured by Future Advocacy. But where does that leave us with regard to the videos that go undetected? Even when a video, like the slurred Pelosi one, is proved to be false, the damage is already done. That clip went viral before anyone even raised a question, probably before Pelosi saw it herself.
Even more recently, friends of the Royals have floated the theory that the infamous photo of Prince Andrew with his 17-year-old accuser, Virginia Roberts Giuffre, is “doctored” and that “his fingers look too chubby.”
Giuffre responded by saying “This photo has been verified as an original and it’s been since given to the FBI and they’ve never contested that it’s a fake. I know it’s real. He needs to stop with all of these lame excuses. We’re sick of hearing it. This is a real photo. That’s the very first time I met him.”
As illustrated by this recent example, the implications go beyond fooling voters. Allegations of deep-fakery could be used to cover up crimes or in other cases, falsely implicate people in crimes.
If the goal of those who make deepfakes is to create chaos and confusion in the U.S. and the U.K., they are already succeeding. We must maintain our vigilance, good humour, and wariness of everything that flickers across our screens. But this wariness, this inability to trust trusted sources, is precisely the chaos, confusion, and disorder that bad actors have engendered. When we don’t know whom to trust, when we can’t believe our own eyes, when every conceivable source of data and information needs to be interrogated, where does that leave us?
In many ways, humans make snap judgements. Perhaps it’s a remnant of a survival instinct, a fight or flight impulse. But thinking on our feet, making quick determinations, is how we get through life. We do not question everything, because there is simply not enough time in the day. If we find that we are unable to trust new sources of information, we may lock down our views, solidify them, and begin to believe that anything that contradicts them is false.
The hardest part, for each individual, in addressing this emerging technology is not knowing what incoming data to trust. This means that when we read or see something that confirms a view we hold dear, we should question it, challenge it, investigate it. We need to make sure we know why we believe what we believe, and not assume truth just because it feels right (or wrong) to us. As deepfakes threaten our reality in every aspect from education to crime to democracy, we must remain aware of what is being thrown at us. If not, it’s going to knock us over.
Chinese-style surveillance has come to the Americas. Tech firm ZTE is making inroads selling surveillance tech to localities throughout South America. ZTE is a Chinese tech company that makes phones, surveillance cameras, and other devices, and it has been accused of leaving a “backdoor” that enables the Chinese government to effectively spy on users. The firm was banned from buying US software and parts, but the ban was lifted last July after ZTE paid a hefty fine. In bringing ZTE’s surveillance tech online, governments in Argentina, Venezuela, Ecuador, and Uruguay claim that citizen safety is their foremost concern.
The US is rightfully wary of ZTE and the kind of surveillance tech it’s selling to our southern neighbours. The concern is not just with those governments who employ the software having access to the collected data, but with ZTE holding that information as well. The Chinese social credit system relies on tech companies collecting data and funnelling it to the government where the information is parsed and individuals’ behaviour is recorded. Could the beginnings of social credit systems be underway in South America?
In Venezuela, a smart-card ID “transmits data about cardholders to servers supplied by ZTE and is increasingly linked by the government to subsidized food, health and other social programs.” In Ecuador and Uruguay, thousands of government-controlled surveillance cameras are already online. The US also has a lot of cameras on the streets, but as anyone who watches Law & Order can tell you, they’re not all linked to some centralized database, and half the time they’re not even working.
Feelings of dread accompany reports of China’s social credit system, where personal misdeeds are logged, recorded, and penalized. The system goes beyond western-style credit checks, expanding “that idea to all aspects of life, judging citizens’ behaviour and trustworthiness. Caught jaywalking, don’t pay a court bill, play your music too loud on the train — you could lose certain rights, such as booking a flight or train ticket,” per Wired. The social credit system is currently in beta but will fully launch in 2020. Already, there have been reports of people being hindered from free travel because of their low social credit scores.
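Mechanically, the system described above amounts to a rule table of infractions, point penalties, and privilege thresholds. The toy sketch below illustrates that shape only; the real algorithm is secret and varies by operator, so every infraction name, weight, and threshold here is invented.

```python
# Hypothetical penalty weights and privilege thresholds, invented
# for illustration -- the real rules are not public.
PENALTIES = {
    "jaywalking": 5,
    "unpaid_court_bill": 50,
    "loud_music_on_train": 10,
}
PRIVILEGE_THRESHOLDS = {
    "book_flight": 900,
    "book_train_ticket": 850,
}

def score(infractions, base=1000):
    # Everyone starts at the same base score; each logged
    # infraction docks its penalty. Unknown infractions cost nothing.
    return base - sum(PENALTIES.get(i, 0) for i in infractions)

def allowed(current_score):
    # A privilege is revoked once the score drops below its threshold.
    return {priv for priv, threshold in PRIVILEGE_THRESHOLDS.items()
            if current_score >= threshold}
```

The unnerving part is visible even in the toy version: a handful of small, unrelated infractions compound into the loss of a concrete right like booking a ticket, with no one decision point where the punishment was ever weighed against the offence.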
The latest South American deal, three years in the works, is the installation of cameras under a surveillance contract in Jujuy, Argentina’s crime-ridden northern province. According to Reuters, “Security Minister Ekel Meyer said in an interview in San Salvador de Jujuy that residents accepted the watchful eye of the security cameras in exchange for safer streets.” Additionally, “a Chinese official in Buenos Aires [said] the Jujuy project could help China expand its tech footprint in the country, by encouraging other cities to adopt similar technology.” Both of these things are cause for concern.
When citizens give up freedom for safety, they are likely to get neither. Introducing facial recognition software to a surveillance system does more than identify those individuals who are caught on camera committing crimes, it tracks everyone. This is how so many really bad ideas are sold to the public. We’re meant to believe that if we allow ourselves to be tracked, our movements, actions, behaviours, and life choices to be monitored, we will all be better off. A common refrain is that only those who have something to hide, criminals, scoundrels, bad actors, should worry. But what about the rest of us? If we each dig into our personal lives, how good are we really? How much of what we do, that we take for granted, would get us into trouble in a social credit system?
This tracking happens not only on an individual level, but on a group level. One of the most successful uses of facial recognition software is in China’s Xinjiang region, where the ethnic and religious minority Uighurs are tracked as they go about their daily lives. As reported by Paul Mozur for the podcast The Daily, many Uighurs are scared to attend worship services because they don’t know how the record of their attendance will be used.
Americans are happy to leave our permanent records behind with formal education, but the promise of this kind of tracking tech is that no bad action, unpaid bill, missed stoplight, or breach of the peace will stay in the past. It makes me think through my own actions and misdeeds, ones that I’ve long since shelved with heaps of other discarded memories, and wonder what of these things would stop my freedom of movement under a widely implemented, Chinese style social credit system. Prohibitively low credit scores can cause enough damage, but what about actually being penalized for non-debt-related infractions?
How would I fare under a social credit system? Would I be able to travel? Take out loans? Get jobs? While the algorithm is both a secret and not entirely uniform, as the system has been rolled out by private companies under the auspices of the government, I decided to take a look at some of the behaviours that can earn a citizen negative social credit, and see how I’d fare. The answer: probably not so great. But are my behaviours really fodder for public shunning?
I had no illusions that I would do well. Because we don’t entirely know what is being tracked, or the effect of those violations, I went with simply those things that I’d done in public that would have been caught on one of these infinite cameras, or that could be accessed by any entity with my social security number.
We could start with my checking account. There’s $17 in it right now. That doesn’t bode well for my financial solvency. I can’t possibly be a productive, consuming member of society with $17 in my bank account. In addition to that, most of my credit is expended, so if there were a crisis of some kind, I’d have to borrow from someone else, or become a burden on “the system.” I don’t own anything. At one point I owned a fridge, but when the time came to move, I couldn’t get the fridge out, so I don’t own that anymore. Which brings me to the fact that I’ve lived in five different apartments since 2000, all rentals, and haven’t paid my rent exactly on time more than once or twice. There were a few rounds in housing court during grad school. This speaks to further instability.
This is only going by the numbers, because the fact that I feel fine about my stats would have no bearing on my social credit score. I pay all of my bills very close to on time. I figure if bills were meant to be paid early, the due date would be earlier. Bills are due on a certain day for a reason, no?
I jaywalk. I know, I know, but I do it anyway, and I’m gonna keep doing it. The best jaywalking is New York jaywalking, where masses of pedestrians all at once decide waiting for the walk sign is for noobs and tourists and we altogether step into crosstown traffic and cross it. On my bike, I rarely wear a helmet. This irks my mother to no end, but I really prefer my hat. For a while, I made a pastime of angry-walking from midtown to Canal Street. It wasn’t intentional, necessarily; I was just walking, while angry. Any cameras that caught me yelling “move” audibly but under my breath at clumps of clueless, confused, slow-walking tourists would surely have debited my permanent record accordingly.
I don’t always separate every piece of garbage from every other piece of garbage. Mostly I forget to rinse the cat food cans; that can’t be good. In transit, I sometimes hold the subway doors open a few seconds after they want to close to give old people and people with kids extra time to board. I give spare change to the homeless instead of reporting their location to the proper authorities. I know these social niceties would cost me dearly. The number of infractions each of us is guilty of is countless. If tech increases the authorities’ ability to punish us for them, those punishments should decrease, not increase, in severity.
What would really hamper my movements are all the protests I’ve participated in. Going back to the early 2000s, I marched for gay rights, to free Mumia Abu Jamal, and against the Republican National Convention in Philadelphia. In 2003, I marched against the Iraq War, with the New York local transit workers’ union, and Jerry Springer. Together we shut down the 59th Street Bridge and 2nd Avenue, all to make an anti-war point. I’ve written and produced anti-government plays and encouraged other writers to do the same. I’ve written a ton of articles about how everything is bats and we need to maintain and defend our rights, even at the cost of safety. Surely, under a government social credit system that valued safety more than rights, my right to speak against that very concept would be hindered, along with my freedom to travel.
China’s interest in promoting these kinds of surveillance solutions is both about the sale, and about the ideological infiltration. So far, the tech is easy to buy, easy to use, and easy to convince populations to subject themselves to. It becomes that much easier for citizens to forget that their rights are not to be given away. The rights we hold, our inalienable, natural rights, belong to us. Safety is not a right, freedom from surveillance is. The government does not have the right to dock us points because it disapproves of our behaviour, and we need to fight against it by saying “no” at every conceivable opportunity. It doesn’t matter if you think you have nothing to hide, the fact of the matter is, those things that you think are perfectly innocent are just as likely to make you guilty as anything else.