- CASL, Canada’s anti-spam legislation, has been with us since July 2014. It’s a terrible piece of legislation for many reasons. In July 2017 a private right of action becomes effective that will allow anyone who receives spam as defined by CASL to sue the sender. CASL sets out statutory damages, so the complainant does not have to prove any damages. Class actions will no doubt be launched. The sad part is that breaches of CASL are to a large extent breaches of the statute’s highly technical requirements, rather than the sending of what most people would call spam. At some point in 2017 we may see a court decision that ponders CASL’s legality.
- PIPEDA, Canada’s general privacy law, has been amended to require mandatory notice to the Privacy Commissioner and/or possible victims when there is a serious privacy breach. This is on hold pending finalization of the regulations – and may be in effect before the end of 2017.
- Privacy in general will continue to be put under pressure by politicians and law enforcement officials who want to advance the surveillance state. The good news is that privacy advocates continue to push back. A UK court, for example, decided that some recent UK surveillance legislation went too far. The Snowden revelations have spurred most IT businesses to use more effective encryption. Unfortunately, I don’t think it is safe to predict that President Obama will pardon Snowden.
- Canada’s trademark registration process will undergo substantive change in 2018 – some good, some not so good. In 2017 the regulations and processes should be finalized, giving us more detail about how it will work in practice.
- We will hear a lot about security issues around the internet of things, or IoT. IoT devices can be a gateway to mayhem. They include such disparate devices as thermostats, light switches, home appliances, door locks, and baby monitors. The problem is that far too often designers of IoT devices don’t design security into them. That makes it easy for malfeasants to use these devices to break into whatever networks they are connected to.
- Artificial Intelligence, or AI, will continue to creep in everywhere. AI is now employed in many things we use – ranging from Google Translate to semi-autonomous cars. Voice-controlled screen and non-screen interactions – which use AI – are on the rise.
- AI is starting to be used in tools that lawyers use, and in tools that will replace lawyers in some areas. In 2017, we will start to see some major upheavals in the practice of law and in how people get their legal needs met. At some point every lawyer (and knowledge workers in general) will have a holy cow moment when they realize the impact of AI on their profession. AI will make inroads in things like legal research and contract generation. It will also foster the provision of legal services online by non-lawyers to a vast underserved market that won’t pay lawyers on the current business model. These services may not be quite as good as those provided by lawyers, but consumers will be happy to pay less for what they perceive as good enough. And the quality, breadth, and sophistication of these services will continue to improve as AI improves.
- Another AI issue we will hear about in 2017 is embedded bias and discrimination. AI makes decisions not on hard coded algorithms, but rather learns from real world data and how things react to it. That includes how humans make decisions and respond and react to things. It thus tends to pick up whatever human bias and discrimination exists. That is a useful thing if the purpose is to predict human reactions or outcomes, like an election. But it is a bad thing if the AI makes decisions that directly affect people such as who to hire or promote, who might be criminal suspects, and who belongs on a no-fly list.
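The mechanism is easy to see in miniature. Here is a toy sketch – hypothetical hiring data with invented numbers – of a “model” that does nothing more than learn historical rates, and so reproduces whatever bias the history contains:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, was_hired).
# In this invented history, group A was hired 80% of the time,
# group B only 40% of the time.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 40 + [("B", False)] * 60)

# "Training" is just tallying outcomes per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict_hire(group):
    """Recommend hiring when the historical hire rate for the group is >= 50%."""
    hired, total = counts[group]
    return hired / total >= 0.5
```

An equally qualified candidate from group B is turned away simply because the data says people like them were turned away before. Real systems are far more sophisticated, but the failure mode is the same: the model is faithful to the data, and the data carries the bias.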
- The cloud has finally matured and will be adopted by more businesses in 2017. Most major international players now have data centres in Canada, which helps to raise the comfort level for Canadian businesses. Many CIOs now realize that putting everything in the cloud can make life easier, as it simplifies business continuity, scalability, mobility, upgrades, and security. Care must be taken to make sure that the right solutions are chosen and the move is done right – but there are compelling reasons why it can be better than doing it yourself.
- The youngest generation in the workforce is always online, connected, and communicating, and expects their workplace to fit their lifestyle and not the other way around. Firms that embrace that will get the best and the brightest of the rising stars. It used to be that business tech was ahead of consumer tech, but that trend has been reversing for some time. More workers will get frustrated when they can do more with their own devices and apps than their corporate ones. That can lead to business challenges in areas such as security – but these challenges around rogue tech in the workplace have been around for decades.
A ZDNet article entitled “Cloud computing: Four reasons why companies are choosing public over private or hybrid clouds” makes a case for the value of the public cloud:
- Innovation comes as standard with the public cloud
- Flexibility provides a business advantage
- External providers are the experts in secure provision
- CIOs can direct more attention to business change
This is all good – or mostly good.
The caveat is that a cloud move can fail if a business adopts it without thinking it through from the perspectives of mission criticality, security, privacy, and continuity. If a business runs a mission-critical system in the cloud and that system fails, the business could be out of business.
The IT manager no longer has to deal with day-to-day issues around keeping software and security up to date – but still has to consider the higher-level issues.
It is important to understand what the needs are for the situation at hand. A system that is not mission critical, or does not contain sensitive information, for example, would not require as much scrutiny as a system that runs an e-commerce site.
Issues to consider include:
- how mission critical the system is
- what the consequences are of a short-term and a long-term outage
- how confidential or personal the information in the system is
- whether the information can be encrypted in transit and at rest
- how robust the vendor’s continuity plan is
- the need for the business to have its own continuity plan – such as a local copy of the data
- how robust the vendor’s security is
- whether the vendor has third party security validation to accepted standards
- whether the vendor’s agreement backs these issues up with contractual terms and service levels with meaningful remedies
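On the “local copy of the data” point, the business’s own continuity plan can start very simply: a scheduled export from the cloud vendor, copied locally and verified by checksum. A minimal sketch of the verification step (file names and paths are illustrative, and it assumes the vendor export has already been downloaded):

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_with_checksum(src, dest_dir):
    """Copy an exported file into a local backup directory, then
    verify the copy against the original by checksum."""
    src = Path(src)
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 preserves file metadata
    if sha256_of(src) != sha256_of(dest):
        raise IOError(f"backup of {src} failed checksum verification")
    return dest
```

Verifying the copy matters: a backup that has never been tested is a continuity plan in name only.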
Cross-posted to Slaw
Makerspaces (sometimes called hackerspaces) are community workspaces – generally in the tech and digital arena. Entrepreneurs might use them as workspaces and to collaborate with colleagues. Hobbyists might use their tools to make something. They often put on workshops – typically around tech and equipment – such as 3D printers. They perform a valuable service to foster learning, creativity, and entrepreneurship.
I learned how to use a Raspberry Pi yesterday at a workshop at UnLondon. (Harrison Pensa is a sponsor of UnLondon, and of their recent Explode conference.) The first project was to wire and code (in Python) an app to create a blinking LED. Crude, yes, but a good, quick introduction.
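For the curious, the blinking-LED exercise amounts to only a few lines. Here is a sketch along the lines of what we wrote – the pin number (BCM 18) and timings are assumptions about the wiring, and it falls back to a call-recording stand-in when run off-device so the logic can be followed without a Pi:

```python
import time

try:
    import RPi.GPIO as GPIO  # present on the Pi itself
    ON_DEVICE = True
except ImportError:
    ON_DEVICE = False

    class _FakeGPIO:
        """Off-device stand-in that just records the calls made."""
        BCM, OUT, HIGH, LOW = "BCM", "OUT", 1, 0
        log = []
        def setmode(self, mode): self.log.append(("setmode", mode))
        def setup(self, pin, mode): self.log.append(("setup", pin, mode))
        def output(self, pin, state): self.log.append(("output", pin, state))
        def cleanup(self): self.log.append(("cleanup",))

    GPIO = _FakeGPIO()

LED_PIN = 18  # assumed wiring: LED (with a resistor) on BCM pin 18

def blink(times=5, interval=0.5):
    """Toggle the LED on and off a given number of times."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    try:
        for _ in range(times):
            GPIO.output(LED_PIN, GPIO.HIGH)
            time.sleep(interval)
            GPIO.output(LED_PIN, GPIO.LOW)
            time.sleep(interval)
    finally:
        GPIO.cleanup()  # release the pins whatever happens

blink(times=3, interval=0.01)
```

Crude, as I said – but wiring a breadboard and watching your own code flash a light is a surprisingly effective first lesson.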
For those not familiar with the Raspberry Pi, it’s a tiny, inexpensive computer that is almost as powerful as a desktop. Google “Raspberry Pi” to see hundreds of things people have made with them – including robotics controllers, TV set-top boxes, arcade games, networking equipment, and home automation.
I’m going to make something with mine for my office – perhaps an information display of some kind – but I’m open to suggestions.
Cross-posted to Slaw.
I attended an event last night where Duncan Stewart of Deloitte talked about their TMT predictions for 2016.
It reinforced for me that the future of tech and what it will do for us is potentially awesome. But also at the same time the amount of information that is being collected and stored about each of us is staggering. That creates real privacy challenges, and real possibilities for abuse. And because the information is there, there is a tendency for government and business alike to want to use it.
One scary aspect is that the more we get used to more information being collected about us, the more complacent we get. Our personal freaky line – the line at which we stop using services because we are concerned about privacy issues – moves a little farther away. That is in spite of the fact that the more information there is about us, the more ripe for abuse it is, and the more that we temper or alter our behaviour because we know we are being watched.
Think for a moment about all the information that is increasingly being collected about us.
- Smartphones that know our every move and the most intimate and personal aspects of our lives.
- Intelligent cars that know where we go and how we drive.
- The internet of things where the stuff we own collects information about us.
- Wearable tech that collects information about our fitness, and increasingly our health.
- The trend for services to be performed in the cloud rather than locally, with our information stored in various motherships.
- Big data that functions by saving as much information as possible.
- Artificial intelligence and cognitive learning tools that can turn data into useful information and make inferences based on seemingly unconnected information.
- Blockchain technology that has the potential to record surprising things about us.
On top of all this, it is becoming harder and harder to understand when our info stays on our device, when it goes somewhere else, how long it stays there, who has access to it, when it is encrypted, and who has access to the encryption keys.
It is in this context – and given that we just don’t have the time to understand and make all the privacy choices we need to make – that the Privacy Commissioner of Canada last week released a discussion paper titled “Consent and privacy: A discussion paper exploring potential enhancements to consent under the Personal Information Protection and Electronic Documents Act.”
The introduction states in part:
PIPEDA is based on a technologically neutral framework of ten principles, including consent, that were conceived to be flexible enough to work in a variety of environments. However, there is concern that technology and business models have changed so significantly since PIPEDA was drafted as to affect personal information protections and to call into question the feasibility of obtaining meaningful consent.
Indeed, during the Office of the Privacy Commissioner’s (OPC’s) Privacy Priority Setting discussions in 2015, some stakeholders questioned the continued viability of the consent model in an ecosystem of vast, complex information flows and ubiquitous computing. PIPEDA predates technologies such as smart phones and cloud computing, as well as business models predicated on unlimited access to personal information and automated processes. Stakeholders echoed a larger global debate about the role of consent in privacy protection regimes that has gained momentum as advances in big data analytics and the increasing prominence of data collection through the Internet of Things start to pervade our everyday activities.
Apple CEO Tim Cook has taken a very public stand against an FBI request and court order to create a backdoor into the Apple operating system. This arose from the investigation into the San Bernardino mass shooting last December.
Kudos to Tim Cook and Apple for this.
Security and privacy experts continue to point out that backdoors are a bad idea that cause far more harm than good.
See, for example, this ZDNet article from yesterday about a new report saying “European cybersecurity agency ENISA has come down firmly against backdoors and encryption restrictions, arguing they only help criminals and terrorists while harming industry and society.”
Sometimes we get so wrapped up in the specs and quirks of our current technology that we forget how far we have come.
To put it in perspective, consider a smartwatch. There are many ways to measure computer performance – CPU speed, amount of RAM, amount of storage, network speed, etc. A common way to compare raw performance, though, is FLOPS, or floating-point operations per second.
A smartwatch can do somewhere in the range of 3 to 9 gigaflops. To put that in perspective, the Cray-2 supercomputer in 1985 could do about 1.9 gigaflops. You could buy one then for about $17,000,000. It used 200 kilowatts of power (several times what a typical home electrical system provides), occupied 16 square feet of floor space (ignoring its separate cooling system), and weighed 5,500 pounds. (A pdf brochure with the details is here.) I’m sure no one then thought we would ever strap something like that on our wrists, let alone order one online and have it arrive a couple of days later.
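Using the figures above, the back-of-the-envelope comparison works out like this:

```python
# Figures from the text above.
cray2_gflops = 1.9          # Cray-2, 1985
cray2_cost = 17_000_000     # US dollars, in 1985
watch_gflops = (3.0, 9.0)   # rough smartwatch range

low = watch_gflops[0] / cray2_gflops
high = watch_gflops[1] / cray2_gflops
print(f"A smartwatch is roughly {low:.1f}x to {high:.1f}x a Cray-2")

# And what one smartwatch's worth of compute would have cost in 1985:
print(f"At Cray-2 prices, that compute would have cost "
      f"${low * cray2_cost:,.0f} and up")
```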
Makes one wonder what the next few decades will bring.
Cross-posted to Slaw
In the 1989 movie Back to the Future Part II, they time travel to October 21, 2015. (The movie was produced by Neil Canton – no relation as far as I know.)
Articles abound today comparing the 2015 depicted in the movie to today’s world. While we don’t have flying cars, and hoverboards have not proceeded beyond some proof of concept demos, drones and flatscreens and a few other things are here.
Another prediction that didn’t come true is the quip that the justice system works swiftly in the future now that they’ve abolished all lawyers.
Wearable tech was envisioned, though, which Gartner currently places just past the “peak of inflated expectations” on its hype cycle. If you believe wearables are just a passing fad or toys, take a look at this article entitled “I’m a cyborg now and so are you.” And consider that one of the panels at next week’s Canadian IT Law Association Conference is entitled “Key IT Law Issues for Wearable & Mobile Devices.” (I’m moderating that panel.)
Cross-posted to Slaw
The Information Technology and Innovation Foundation has released their analysis of how privacy advocates trigger waves of public fear about new technologies in a recurring “privacy panic cycle.”
The report is an interesting read and makes some valid points. In general, people fear new things more than familiar ones – like the infrequent flyer who is nervous about the flight when, statistically, the most dangerous part of the journey is the drive to the airport.
While a privacy panic for emerging tech is indeed common, we can’t summarily dismiss that panic as having no basis. The key is to look at it from a principled basis, and compare the risks to existing technology.
New tech may very well have privacy issues that need to be looked at objectively, and privacy protections should be built in from the start (an approach called privacy by design).
Even if the privacy fears are overblown, purveyors of the technology need to understand the panic and find a way to deflate the concerns.
Cross-posted to Slaw
Depending on how you define a self-driving car – probably sooner than you think.
Sometimes new technology seems to come out of nowhere, but more often it creeps up on us. The legal disruptions that new tech spawns often follow the same path – usually a combination of lagging behind new technology and getting in its way.
Current advances that come to mind include smart watches, drones, electric cars, and Tesla’s Powerwall.
Take self-driving cars, for example.
It’s not as if we will go directly from a totally human-driven car to a totally autonomous car. They will creep up on us. The Google self-driving car gets a lot of press, and understandably so, but mainstream automakers are rolling out these features now. We already have cars with features such as self-parking, adaptive cruise control, cross-traffic alerts, and lane departure warnings. Over time these will morph from warning systems, to taking control for brief moments, to driving for longer periods of time. Self-driving will start on highways before it moves to city driving.
Actually, self-driving trucks might become prevalent sooner than self-driving cars.
Cross-posted to Slaw.
If you are an Apple fan, April 24, 2015 marks the beginning of the smartwatch era – the date the Apple Watch becomes available. (Preorders start April 10th.) Smartwatches have been around for a while, but given the Apple reality distortion field, they will initially sell in large numbers, even though they are the most expensive ones available. The basic Apple Watch is functionally the same as the most expensive gold edition that starts at $10,000. (Someone said that if you can afford a $10,000 watch, you probably don’t need to know what time it is.)
But there are alternatives, including several Android versions, the Pebble, and the Microsoft Band. Version 2 of several of these are expected soon.
Smartwatches are designed to be an interface to your smartphone. But if you want something that comes at this from a different angle, check out the Neptune – from a Canadian company that takes the intriguing approach of making the device on your wrist the main computer. There are still a few days left to take advantage of their Indiegogo campaign.
Personally – as much as I want one – I’m waiting for the upcoming second-gen Android versions. But then again, that Neptune is rather cool…
Cross-posted to Slaw