Desjardins offers 4.3 million members protection from identity theft after data leak
After a data breach affecting 40% of Desjardins members and leaving them vulnerable to identity theft, president Guy Cormier announced he would offer free protection to all of the credit union’s members.
Desjardins counts about 4.3 million individuals and 300,000 businesses among its members. According to La Presse, every one of them will be promised free legal assistance and compensation for identity theft losses.
The credit union is willing to give up to $50,000 to clients in need.
Cormier stressed that the protection is granted automatically: “no need to call, no need to come to the bank. Whether you were affected by the leak of personal information or not, you are now protected.”
Desjardins says they will take on responsibilities including filing police reports and contacting government agencies.
This initiative follows a leak by an “ill-intentioned” employee who collected the data of millions of people and shared personal details including addresses, birth dates, and social insurance numbers.
The ensuing public outcry prompted a petition with tens of thousands of signatures. Customers asked for new SINs, which, the petition stated, is the “least the Canadian government could do to help restore some peace of mind to the victims.”
To these beset Desjardins customers, the company has promised five years of free credit monitoring with Equifax. To date, few members have registered with Equifax; the credit-monitoring agency is reportedly having problems with service times and linguistic accessibility.
Of the 2.7 million affected, only 360,000 (13 percent) have filed with Equifax.
Cormier also confirmed that there has been neither an increase in reports of fraud nor a large exodus of clients. “I do not want to trivialize identity theft, but in the last few weeks, all the specialists we worked with told us that the proportion of data leaks that results in identity theft is very small.”
Meanwhile, members of the Quebec government are set to hold a parliamentary committee hearing on the data breach, and members of the House of Commons are set to discuss the possibility of issuing new social insurance numbers and of protecting against future data leaks.
Scientists at the University of Vermont have created what they claim to be “living robots.” The first of their kind, these robots are built out of living cells, making them an entirely new life form, according to a recent article in The Independent.
Never before has humanity managed to create “completely biological machines from the ground up,” wrote the research team in a recent paper.
The cells were derived from frog embryos and assembled into a machine that can be programmed to work however the research team wants.
Such a discovery could allow the tiny “xenobots” to be dispatched throughout a patient’s body to transport medicine or even do environmental work such as retrieving pollution from the ocean. The scientists claim the xenobots even have the ability to regenerate themselves when damaged.
The new hybrids were designed on a supercomputer and then built by biologists. “These are novel living machines,” says Joshua Bongard, the University of Vermont expert who co-led the new research. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”
The xenobots were built at Tufts University. “We can imagine many useful applications of these living robots that other machines can’t do, like searching out nasty compounds or radioactive contamination, gathering micro-plastic in the oceans, travelling in arteries to scrape out plaque,” said co-leader Michael Levin, who directs the Center for Regenerative and Developmental Biology at Tufts University.
Researchers used a supercomputer to generate thousands of possible designs for the new life forms. Running a virtual version of evolution, the scientists would assign the computer a task and have it calculate which designs might perform that task best.
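The virtual evolution described here can be illustrated with a toy example. The sketch below is a minimal evolutionary search, not the researchers’ actual code: it “evolves” bit-strings toward a simple stand-in task (maximizing the number of 1s), keeping the best candidates each generation and mutating them, much as the supercomputer scored candidate xenobot designs against an assigned task.

```python
import random

def evolve(genome_len=20, pop_size=30, generations=100, seed=0):
    """Toy evolutionary search: maximize the number of 1s in a bit-string
    (a stand-in for 'score this candidate design on the assigned task')."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # task-specific score; here, the count of 1s
    # Start from a random population of candidate "designs".
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # mutate one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Real design runs differ mainly in scale: the fitness function is a full physics simulation of the candidate body, and the search explores thousands of designs rather than bit-strings.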
The second part of the research involved microsurgeons bringing the designs to life. They took stem cells from the embryos of African clawed frogs, incubated them, and then used specialized tools to cut them apart and reassemble them into the designs created by the computer.
Assembling real organic material into a life form that had previously not existed anywhere in nature is a definite first for the field.
The xenobots already have the ability to push pellets around and organize themselves collectively and spontaneously.
Scientists think this is just the beginning and that they will be able to create even more complex versions of the xenobots. The computer simulations so far suggest that future xenobots could carry a pouch on their bodies, allowing them, for example, to swim through a patient’s body and deliver a drug.
The xenobots can regenerate themselves when damaged: a robot can be sliced almost in two and will knit itself back together. And unlike the materials used in traditional robots, xenobots are entirely biodegradable once their work is finished.
There is a danger in all of this, however, researchers admit: such systems could develop in ways we do not understand, and the more complex they become, the harder the xenobots’ behaviour will be to predict.
“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” said Levin in a statement. “This study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences,” he said.
Tens of thousands of tweets have flooded in as Facebook and Instagram, both owned by Facebook, experience vast technical difficulties, loading slowly for many users across the world.
Instagram published a response via Twitter that the tech giant is “aware that some people are currently having trouble accessing Facebook’s family of apps, including Instagram,” and promising to “get things back to normal as quickly as possible.”
WhatsApp is also reportedly experiencing issues.
Tech ethicists have been sounding the alarm about deepfakes for some time now, and tech think tank Future Advocacy has decided to show just how possible and damaging this tech can be. They’ve released a fake campaign video that shows the two candidates for the coming U.K. election endorsing each other.
Rationally, we know that Jeremy Corbyn and Boris Johnson would never actually endorse each other for the office they both covet, yet our eyes deceive us when we watch a video like this. In the hands of Future Advocacy, the video is openly revealed as a fake; in the hands of bad actors, the same tech could be used to disrupt elections all over the world.
Unlike the magician who guards his sleight of hand with care, Future Advocacy reveals how the trick was turned. First, they choose the source video: the clip that will supply the base image and movement of the person being faked. Then they parse the words the person uses most often and write a script that sounds like something that person would say. After that, the voice is laid in and aligned with the movements.
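Those three steps form a simple pipeline: pick a source clip, draft a plausible script, then marry the audio to the video. The sketch below is purely illustrative; every function name, field, and data value is a hypothetical placeholder standing in for the real (and far more complex) machine-learning stages, not code from any actual deepfake tool.

```python
# Schematic of the three-stage workflow Future Advocacy describes.
# All names here are hypothetical placeholders for illustration only.

def pick_source_clip(clips):
    """Stage 1: choose the clip supplying the base image and movement."""
    return max(clips, key=lambda c: c["face_visibility"])

def write_script(word_frequencies, length=5):
    """Stage 2: draft a script from the person's most-used words."""
    ranked = sorted(word_frequencies, key=word_frequencies.get, reverse=True)
    return " ".join(ranked[:length])

def align_voice(script, clip):
    """Stage 3: lay the synthesized voice over the clip's movements."""
    return {"video": clip["id"], "audio": script}

# Toy inputs standing in for real footage and speech analysis.
clips = [{"id": "rally.mp4", "face_visibility": 0.9},
         {"id": "interview.mp4", "face_visibility": 0.7}]
freqs = {"frankly": 12, "tremendous": 9, "people": 7, "very": 5, "good": 3}
fake = align_voice(write_script(freqs), pick_source_clip(clips))
```

In a real deepfake, each of these stubs is replaced by a trained model: face-tracking for stage 1, text generation for stage 2, and voice synthesis plus lip-sync alignment for stage 3, which is what makes the result so convincing.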
Last month, the U.S. Senate passed the Deepfake Report Act, which “would require the Department of Homeland Security to publish an annual report on the use of deepfake technology that would be required to include an assessment of how both foreign governments and domestic groups are using deepfakes to harm national security.”
The Senate became more concerned about the problem earlier this year, when a parody video of Nancy Pelosi made her look drunk. The video was not actually a deepfake but a real clip slowed down to make her appear sluggish; still, it was enough to strike fear into the hearts of legislators.
While the Deepfake Report Act is a step toward understanding how the tech is used, what is still needed are tools to detect it. Facebook, ever in the spotlight when it comes to big-tech backlash, has dedicated $10 million to the study of deepfakes.
The Pentagon’s Defense Advanced Research Projects Agency (DARPA) has been researching deepfakes, learning first how to make them so that it can learn how to detect them. The creation of deepfakes depends entirely on computer analysis, as does their detection.
It’s a good bet that while Future Advocacy and the Pentagon work on raising awareness and figuring out how to combat this problem, respectively, those who would sow the seeds of chaos around the world are working just as hard to make their fakes undetectable.
The very concept of reality is under threat. Libel and defamation laws could punish those who knowingly make faked campaign videos such as the one conjured by Future Advocacy. But where does that leave us with the videos that go undetected? Even when a video, like the slurred Pelosi one, was proved false, the damage was already done: the clip went viral before anyone raised a question, probably before Pelosi even saw it herself.
Even more recently, friends of the Royals have floated the theory that the infamous photo of Prince Andrew with his 17-year-old accuser, Virginia Roberts Giuffre, is “doctored” and that “his fingers look too chubby.”
Giuffre responded by saying “This photo has been verified as an original and it’s been since given to the FBI and they’ve never contested that it’s a fake. I know it’s real. He needs to stop with all of these lame excuses. We’re sick of hearing it. This is a real photo. That’s the very first time I met him.”
As illustrated by this recent example, the implications go beyond fooling voters. Allegations of deep-fakery could be used to cover up crimes or in other cases, falsely implicate people in crimes.
If the goal of those who make deepfakes is to create chaos and confusion in the U.S. and the U.K., they are proving that they are already capable of achieving success. We must maintain our vigilance, good humour, and wariness of everything that flickers across our screens. However, this wariness, this inability to trust trusted sources, is the chaos, confusion, and disorder that bad actors have engendered. When we don’t know who to trust, when we can’t believe our own eyes, when every conceivable source of data and information needs to be interrogated, where does that leave us?
In many ways, humans make snap judgements. Perhaps it’s a remnant of a survival instinct, a fight or flight impulse. But thinking on our feet, making quick determinations, is how we get through life. We do not question everything, because there is simply not enough time in the day. If we find that we are unable to trust new sources of information, we may lock down our views, solidify them, and begin to believe that anything that contradicts them is false.
The hardest part for each individual in dealing with this emerging technology is not knowing what incoming data to trust. This means that when we read or see something that confirms a view we hold dear, we should question it, challenge it, investigate it. We need to know why we believe what we believe, and not assume something is true just because it feels right (or wrong) to us. As deepfakes threaten our reality in everything from education to crime to democracy, we must stay aware of what is being thrown at us. If not, it’s going to knock us over.
A video has made its way across the web, spreading like wildfire. It features a “robot” created by “Bosstown Dynamics”: a gunslinger performing some incredible shooting drills, all while being knocked around by a couple of assholes with hockey sticks.
What’s most impressive, though, is that the bot can seemingly distinguish between living and non-living figures, as it only shoots targets during the drills, even when presented with human targets beating it up.
There’s a catch, though. It’s fake.
The video was posted by Corridor Digital and features a computer-generated robot from Bosstown Dynamics, a spoof on Boston Dynamics, an actual engineering and robotics company spun out of the Massachusetts Institute of Technology.
The most recent Boston Dynamics video, which is actually much more frightening than the spoof in a way, shows a humanoid robot doing parkour tricks, flips, and handstands.
The video gained major traction when it was shared by comedian and podcast host Joe Rogan, who had clearly fallen for the CGI bot.
Of course, because the internet is the way it is, the replies to Rogan’s tweet are flooded with users telling Joe that the video is fake, pointing out CGI errors, and even noting that the crew at Corridor Digital made a reveal video showing how they created it.
While these robots may not be real yet, it will be interesting to see how companies like Boston Dynamics gradually ruin the Earth by developing robot armies that will overthrow humanity.
These Black Mirror-like robots might not be too far around the corner.