Privacy breaches and complaints can often be resolved cooperatively. We usually hear about the large, dramatic, far-reaching breaches more than about the smaller ones that get resolved quietly.
The privacy commissioner just released some examples.
In one example, a fraudster socially engineered information out of customer service representatives, then used it to contact customers and try to obtain further information that could be used for fraud. The business investigated, contacted the individuals who may have been compromised, and took steps to reduce the chances of it happening again.
In another situation, a rogue employee took customer information, which was then used to impersonate the company and collect money from a customer. The business was not very responsive to the customer's complaint until the privacy commissioner got involved. In the end the employee was dismissed, the customer was made whole, and steps were taken to reduce the chances of it happening again.
From a business perspective, it shows the need to take privacy complaints seriously, and deal with them quickly and effectively.
From a consumer perspective, it shows the need to be cautious when you are asked for your information – especially when someone contacts you. And be patient when your service providers take steps to make sure you are who you say you are.
Cross-posted to Slaw.
In 2015 the US FCC took steps to prevent ISPs from discriminating against internet traffic. This is called Net Neutrality, which Wikipedia describes as “…the principle that Internet service providers and governments regulating the Internet should treat all data on the Internet the same, not discriminating or charging differentially by user, content, website, platform, application, type of attached equipment, or mode of communication.”
The gist of the concept is that the owner of the pipes shouldn’t be able to favour the delivery of its own content over content provided by others.
At the risk of oversimplifying this, net neutrality is generally favoured by consumers and content providers, but not so much by ISPs.
In what is seen as a backwards step for US consumers, the new chair of the FCC has made it clear that he is not a fan of the principle.
For more detail, read this New York Times article titled Trump’s F.C.C. Pick Quickly Targets Net Neutrality Rules and this CNET article titled Meet the man who’ll dismantle net neutrality ‘with a smile’.
Cross-posted to Slaw.
Included in Trump’s reprehensible executive order “Enhancing Public Safety in the Interior of the United States” was this:
Sec. 14. Privacy Act. Agencies shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.
The Privacy Act covers personal information held by US Federal agencies. This would apply, for example, to information collected about Canadians entering the United States.
This should be attracting the wrath of the Canadian privacy commissioner and the Canadian government.
More detail is in this post by Michael Geist and this post on Open Media.
Given this attitude, we should be redoubling efforts to make sure our communications are encrypted.
Conventional wisdom has been that our data is just as safe in the US as in Canada, given that both countries place limits on privacy when it comes to law enforcement and government access to our information. But this cavalier attitude puts that into question, and it may be prudent for Canadian entities to keep their data in Canada to the extent possible. Where that isn’t practical, steps should be taken (and assurances obtained from vendors) to encrypt the data in a way that the provider doesn’t have access to it.
Cross-posted to Slaw.
That’s the title of a 25-minute video that is worth watching if you have an interest in where computing is going.
Don’t panic if you have just decided to do more of your business computing in the cloud. That isn’t going away any time soon.
It means that we will see more edge or fog computing. Some of the computation that now happens in the cloud will increasingly happen at the edge of the network. That might be in IoT devices, our phones, cars, or Alexa-type devices. Think of it as a return to distributed computing. Peer-to-peer networks will become more common as well, such as cars that talk directly to each other so they can drive more safely near each other.
In part this is because devices are becoming more capable. For example, artificial intelligence assistants currently rely on the cloud to answer some queries. Think of Siri or Alexa sending your queries to the cloud. Hardware and software advances will make it possible to do more of this at the endpoint – such as directly on your phone. (That might have a side benefit of helping on the privacy front.)
Edge computing is in part being driven by necessity. The sheer number of devices generating data, and the volumes of data they will generate, will be overwhelming. For some applications, the cloud is simply not fast enough or reliable enough. It is one thing if it takes a couple of seconds to get your answer back on the weather forecast. But a self-driving car needs to react instantly to stop when someone steps off a curb in front of it.
The cloud will be where learning occurs, and where much of the data resides, but data curation and decision making will be done at the edge.
Cross-posted to Slaw.
January 28 is Data Privacy Day – “an international effort held annually on January 28 to create awareness about the importance of privacy and protecting personal information.”
The IAPP (International Association of Privacy Professionals) is honouring the day with local “Privacy After Hours” events on Thursday January 26th.
Privacy professionals in London are welcome to attend the event being held at McGinnis Landing restaurant. Harrison Pensa is pleased to provide the appetizers for the event.
You can sign up for the event on the IAPP website. You have to create an IAPP logon ID to register – which is quick and painless to do.
The 2016 Fashion Santa. Photo source: yorkdale.com
A Toronto mall and its former “Fashion Santa” are having a snowball fight over the character. The mall hired a new Fashion Santa this year instead of the person who played the role before. The dispute is over who owns the character and name. They even have duelling trademark applications for Fashion Santa.
In the end it comes down to the facts (including whether the individual is an employee or an independent contractor, and who developed the character) and the nature of any agreement that might exist.
While disputes over public characters make for good press, this kind of dispute is actually not that rare. Disputes often occur between individuals and the entities that hire them over who has rights to intellectual property.
Typically the individual claims they created something before they were hired, or that they are really independent contractors providing a service. The business claims either that it created it on its own, the employee merely had an idea that it then developed, or the employee developed it as part of the employee’s duties. These issues can be difficult to sort out, as the facts are often fluid, and subject to different points of view.
The best and easiest time to sort out ownership issues is at the beginning, in writing. But it may not be on the parties’ minds then. Ownership and rights issues often get controversial only when something becomes successful and money gets involved – such as the publicity and success of Fashion Santa.
Cross-posted to Slaw.
The Supreme Court of Canada, in Royal Bank v Trang, made a privacy decision that will bring a sigh of relief to lenders and creditors.
A judgment creditor asked the sheriff to seize and sell a house to satisfy the judgment. To do that, the sheriff needed to know how much was owed on the mortgage on the house. The mortgage lender didn’t have express consent to provide the information, and said PIPEDA prevented it from giving it. Lower courts agreed.
But the SCC took a more practical approach. The issue was whether there was implied consent to release that personal information. The SCC said there was.
They interpreted implied consent in a broader perspective, looking at the entire situation, including the legitimate business interests of other creditors. Financial information is considered to be sensitive personal information, and thus in general faces a higher threshold for implied consent. But in this context, they held that it is a reasonable expectation of a debtor for a mortgage lender to provide a discharge statement to another creditor wanting to enforce its rights against that property.
Cross-posted to Slaw.
Big data and privacy was one of the topics discussed at the Canadian IT Law Association conference this week. Some of the issues worth pondering include:
- Privacy principles say you should collect only what you need, and keep it only as long as needed. Big data says collect and retain as much as possible in case it is useful.
- Accuracy is a basic privacy principle – but with big data accuracy is being replaced by probability.
- A fundamental privacy notion is informed consent for the use of one’s personal information. How do you have informed consent and control for big data uses when you don’t know what it might be used for or combined with?
- Probability means that the inferences drawn may not always be accurate. How do we deal with that if we as individuals are faced with erroneous inferences about us?
- If the inferences are based on information that may itself be questionable, the results may be questionable. (The old garbage in, garbage out concept.) It has been proposed that for big data and AI, we might want to add to Asimov’s three laws of robotics that an AI won’t discriminate, and that it will disclose its algorithm.
- If AI reaches conclusions that lead to discriminatory results, is that going to be dealt with by privacy regulators, or human rights regulators, or some combination?
- Should some of this be dealt with by ethical layers on top of privacy principles? Perhaps no go zones for things felt to be improper, such as capturing audio and video without notice, charging to remove or amend information, or re-identifying anonymized information.
Cross-posted to Slaw.
There have been many articles written suggesting that lawyers should learn how to code software. This Wolfram Alpha article is a good one, although many of the articles are far more adamant that every lawyer needs to learn how to code. The rationale is that because software will have an increasing effect on how lawyers practice, and who will be competing with us to provide our services, we should learn to code.
So should we learn how to code? For most lawyers, probably not.
I knew how to code before law school, and for me it has been very useful. Since my practice is largely around IT issues, it has helped me understand those issues and discuss them with clients. It has also influenced my drafting style for both contract drafting and the way I communicate with clients.
But the thought that learning how to code will give us a leg up against competitors who are developing or adopting intelligent solutions to replace our services, or will help us develop our own systems to compete or make us more efficient, is flawed. The systems that are going to have the biggest impact are based on artificial intelligence. That is very sophisticated, cutting edge stuff, and learning how to code is not going to help with that. It is something that we need to leave to the experts, or hire experts to do.
Lawyers interested in this can find resources discussing artificial intelligence and where it is headed (such as the artificial lawyer site and twitter feed that posted the Wolfram Alpha article). Looking at where this is headed, and how it might affect the practice of law, would be more productive than learning how to code.
Cross-posted to Slaw.