Katina Michael

Jordan Brown interviews Katina Michael, for the film Stare Into The Lights My Pretties.


Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong, Australia. She is also the IEEE Technology and Society Magazine editor-in-chief, and IEEE Consumer Electronics Magazine senior editor.

Jordan Brown: Can you please give us a recap of how technology has come from the ENIAC, post-Second-World-War, to where we are today?

Katina Michael: Well, the ENIAC was a 1946 innovation, and is widely considered the “first” automated general-purpose computer. These things were really huge; they covered walls from floor to ceiling, with large floor spaces dedicated to just a single computer. I think once the transistor was invented, after the vacuum tube, it was inevitable that chipsets would come next, that lots and lots of these would fill computers and large rooms, and that miniaturisation was inevitable. These computers got smaller; they didn’t cover the walls and floors of buildings, but actually got to the point where you could put them up against a single wall (e.g. mainframes and minicomputers and the like); then toward the mid-1980s microcomputers entered the scene; then we got to the point in the 1990s where we could carry and lug these around with us, and now we can even wear them. Google Glass, for example, Google’s latest product, enables us to have lots of sensors and lots of chips in our digital eyeglasses. So what’s the next step? We’ve gone from luggables to wearables. Are bearables the next phase of innovation? How much smaller are we going to get? What’s the next quantum leap for computing and humankind?

Jordan Brown: So where do you think it sits at today in terms of the size? What do you see happening after that? We’ve had this progression from the size and capability, what’s next?

Katina Michael: So the progression has been that we first observed the chip embedded in other objects. We then had chips in devices that we carry with us, like smartphones, and inevitably perhaps one can ponder that they’ll be injected into our bodies. We’ve already seen this demonstrated in heart pacemakers since the 1960s, for instance, and much later in the innovation of the Cochlear implant (an implantable device for the deaf). The question is what happens when we take these commodities and apply them to non-traditional areas of application. What you then have are typical biomedical devices applied to everyday contexts for convenience as opposed to need.

So what do I see? I see little tiny cameras in everyday objects. We’ve already been speaking about the Internet of Things—the web of things and people—and these individual objects will come alive once they have a place on the Internet via IP. So you will be able to speak to your fridge; know when energy is being used in your home; your TV will automatically shut off when you leave the room. All of these appliances will be talking not only with you, but also with the suppliers, the organisations you bought these devices from. So you won’t have to worry about warranty cards; the device itself will alert you to the fact that you’ve had this washing machine for two years and it requires a service. Our everyday objects will become smart and alive, and we will be interacting with them. So it’s no longer people-to-people communications or people-to-machine, but actually the juxtaposition of this, where machines start to talk to people.

Jordan Brown: And so that’s happening today?

Katina Michael: So we’ve had, for example, the Auto-ID Labs, an MIT initiative which first put forward the notion of the Internet of Things. The Internet of Things really coincided with the rise of Radio-Frequency Identification (RFID), and we’ve taken this proposed way of interlinking RFID with everyday objects and everyday applications, and we’re now at a point, with the emergence of Smart Grid infrastructure and Smart Meters, where yes, we have the capacity to interact with everyday objects. And so there are distinct relationship links between objects and subjects—subjects being people, objects being things. So things have now come alive, and people relate to things. People relate to animals—they own animals with implants; people have relationships with other people in social networks; people use machines, etc. So I want you to think about the Internet of Things as one big social network—it’s just that your social network might actually have a list of the devices you own and have purchased, as well as your family members, pets, car, etc.

Jordan Brown: Can you explain what the Internet of Things is, or what you see it to be?

Katina Michael: The Internet of Things is when a convergence between various network levels—infrastructure, the core, the edge, the application devices that users carry—begins to let these levels interact with one another and share data that’s collected from the field. To me this has a lot to do with the field of telematics. So in a simple scenario with fleet management, for example, you may have a fleet of vehicles. You may be transmitting to them information about where to go next, or the shortest path to a delivery destination. But these fleets may also be interacting with you as the fleet manager, telling you “we’re here, we’re currently experiencing unanticipated congestion, and the sensors on the street have alerted us to a better route to take so that we save on fuel and optimise our business processes”. So it’s these constant data flows back and forth from the field to the network operations centre, to the storage centre, to the hubs, in this large complex system of systems. There’s a lot of embeddedness. There’s a lot of nestedness. There are lots of larger systems interacting with the embedded systems within. There are data flows and protocols and handshakes occurring between subjects (as in people) and objects (as in things), and so it’s all about business process optimisation on the back-end: how do you best optimise your business processes, say, in order to save on operational expenditure and know how your money is being expended? Now on the subject side, on the human side, companies will know where their employees are every moment of the day, and this will help in business process optimisation—the only thing is, people are not machines. People work well under pressure, but once pushed over the edge may feel they’ve been driven too far, and so snap like elastic bands. You can quash the human spirit, because humans are not robots.

Jordan Brown: Do you think that premise that people are machines or even the phrase “Human Resources” is a part of the ideology of the advancement of these things (say the Internet of Things, or technological progress in general)? Do you think there’s an assumption that humans are just a part of the vast machine to be controlled in a further granularised way with all of this data coming back to the people that own and run the networks?

Katina Michael: Humans are a vital part of any business process. I think we can build really smart systems in order to do away with some of the human judgement that takes place, in order to reduce risk; so then you have automated, data-driven processes that can help you make the right decisions at the right time—for example, during emergency responses to fires. You have these sensors in the field that feed back to you the right and appropriate information, and may automatically suggest the best way forward. My problem with removing humans from business processes altogether, however, is that the machine will never know what is inside the brain of the human, and so the exhaustive “what if” options used in an automated decision-making process may not be so exhaustive, because they’ve been programmed by people and limited to predefined options. But if you bring together a vast array of people to respond to a particular decision that requires a lot of intellect and a lot of cross-disciplinary knowledge, I don’t think you can replicate that on any artificial machine, or with alleged artificial intelligence.

Of course, what we’ve seen over the last few decades is this need to do away with large items of operational expenditure—and in any company, the labour force is usually in excess of 40% of its ongoing operational expenditure. So what do companies do in times of difficulty? They look at their financials and they think, “How can I get rid of X percent of my labour force and replace this labour with machines that have a lower expenditure and a higher return on investment, because they never get tired, don’t need to sleep, and don’t need to go on annual leave?” If we’re talking about the manufacturing industry, for example, where robotics has done very well, we now have whole factories being run by robots. Of course these are human-programmed robots, where people have fed in instructions and creatively produced software to enable these factories to run. But the question is: what do we do with the people who don’t have jobs? Are we reskilling them in any way, or just de-skilling them and pushing them out of the labour force?

Jordan Brown: I’m not sure if I should ask about drones on that point?

Katina Michael: Actually, I would rather talk about countries like Bangladesh, Nepal and India, and numerous hotspots in South America. For some time we have been aware that many organisations use human resources as if they were machines, and that the amount of money workers are paid for their labour is nowhere commensurate with the retail price of the garments, for example, being distributed and sold in more developed countries. Someone may be working for a dollar a month while I pay in excess of one hundred dollars for a garment, not knowing what kind of effort has gone into that particular garment. Business process optimisation in these organisations and factories is benchmarked in seconds: if you don’t complete a part of the garment within X number of seconds, then you haven’t done your job appropriately; you’re not working; and you’re at risk of losing your job. So the pressure in these factories—and I want to point especially to the example of the production of Information Technology components and devices—is overwhelming to the individual manufacturing worker. For example, there have been multiple suicides in large factories which build everyday components for companies like Sony and Apple and many other well-known brands, whereby employees have wished to end their lives due to the pressure of engaging with difficult employers who drove them so hard that they’ve thrown themselves off a 10th floor. I mean, how do you cope with 17 suicides in a single organisation over a two-year period? That’s a high number, and quite concentrated when compared with the rest of the world’s statistics on suicides.

Jordan Brown: They put nets up to stop them successfully jumping down. Send them back in.

Katina Michael: That’s it. They did. So that’s even worse. ‘Die slowly.’

Jordan Brown: Yes. I want to go back to the Internet of Things. What I really want to discuss is: who owns that infrastructure? It’s not the consumer—they’re just a passive recipient of the technology existing in that space. I want to discuss possible manipulation. I want to examine the forces impacting on people existing in that space that they might not be aware of, and even if they are aware, there seem to be lots of excuses for ‘no cause for concern.’ I’d like to get your take on that. So if people have very little control over the infrastructure of the Internet of Things from the outset, what’s happening to them?

Katina Michael: As consumers we buy products from stores. We buy computers, mobile phones, and other high-tech gadgetry. When we bring these into our homes and start using them—whether at work, for personal applications, at university, or around with our friends—what we’re doing is actually buying into these new innovations and their externalities. What do I mean by that? We’ve become enslaved by our adoption of these devices and constantly feel we have to upgrade to the next one. We constantly feel that we have to be using them, to be seen to be keeping up with the Joneses, and beyond that. Actually, at times I think people have no choice—we may wish to stop this cycle of usage but cannot. All of these devices are online devices; they can be used wirelessly in any space, any geographic context. They allow us to be located, tracked and identified (because you’ve got passwords, and you’ve got a location where you receive signal strength from your nearest Wi-Fi access point). All of these put us onto a grid of sorts—it’s just called the Internet. So we can be tracked: our behaviour and our behavioural patterns—how often we log on, where we log on from, who we are logged on with, whether we are visiting a social networking site, for example—all of these behaviours are logged and audited. Once shared, this data provides the infrastructure for these services. From the service provider’s point of view, what they’re saying is, “The more we know about you, the better we can service you.” But in actual fact what’s being done is that your data is being collected (potentially anonymised) and collectively used to identify more of what you’re interested in. So your buying habits, where you visit online—we’ll just throw a little bit more about this or that product at you, because that’s what you’ve been searching for. So what goes into that search box, that search engine you use to look around for your various likes and dislikes, and the questions you might have, actually depicts you as a person—it’s a digital DNA footprint which distinguishes you from everyone else.

Jordan Brown: And that’s called data mining?

Katina Michael: This is data analytics, predictive analytics—data mining in the traditional sense. But whereas data mining was very applicable to a data-surveillance world, we are now looking at über views—holistic views of the person: identity, location, the video you’re watching, the images you’re uploading, etc. All of these different contexts and sensory data are coming back to give us an über view, in an überveillance society. So I think we’ve moved on from data analytics to über-analytics—being able to define you not just by the transactions you make at a store, but by absolutely every touch-point you engage with in an online context.

Jordan Brown: And across all the devices you’re using…

Katina Michael: So service providers have an über-context to work with. They can analyse your movements and analyse your behaviours across a vast array of devices. So in the morning you might be checking your email via your desktop but as you exit the house, you have your smartphone clipped on. And so your interactions and the different devices that you use and the different applications that you may interact with on a day-to-day basis, for example, social networking tools, define who you are and define how much activity you have with particular groups in society.

Jordan Brown: So taking that, I’m going to play devil’s advocate for a moment. Imagine that I’m Joe Consumer. How would you convince me that all of these technologies that I’m using and all those processes that are happening, pose a risk to me in any way? Or even us collectively as a society? The line we often hear is “It’s convenient. I love these things. I’m happy to make that trade-off (of mass surveillance).” So if you were to convince someone or to even just point out the risks, what would you say?

Katina Michael: The phrase “I’ve got nothing to hide, so I’ve got nothing to fear” is something that’s often said. To that I always say, “You’ve got nothing to fear and nothing to hide until somebody identifies that you have otherwise.” The other thing that is very pronounced is that, as consumers, the very information we hand over gives rise to our own exploitation and manipulation. It is like luring consumers into admitting certain weaknesses for particular goods or services, because they have stated on Facebook that they are considering buying product X.

The other thing about the current reality is that attacks on consumer privacy are asymmetric. The vast population goes about its daily workings and activities as normal, but for those individuals who become caught up—by accident—in further questioning about their particular physical or online behaviours, there is an asymmetric trade-off: I am one person against perhaps an army, being accused of X or Y, when all I’ve actually done is search for a piece of information on the Internet. The asymmetry occurs just as in credit card fraud, where a lot of money is stolen by hackers annually and the credit card companies write this off as just a liability, another cost of doing business (because the interest they’re making on credit cards is so huge, they do not invest in more secure technologies—they can write off the losses). But if you are the victim of credit card fraud, and somebody has stolen your credit card, gone and visited an escort service, and that particular line item then appears on your statement and creates a family furore, you’re on the other side of the relationship. You feel victimised, but the credit card company really doesn’t care; what they do is just reimburse you the amount of money. What you want is that line item removed, but you can’t get it removed—it’s a service that has been paid for. I’m not saying people should not use credit cards. What I am saying is that we only feel the full force of this current reality if we find ourselves as victims; otherwise we may be oblivious to the goings-on.

Jordan Brown: So that’s the thing. What happens when we have a culture embedded in the world of screens, where the information flows coming out of the screens are seen as valuable, objective, and indisputable—what happens when something like you’ve mentioned above occurs and you have the fallout, the impact on relationships, the question of what is trusted, what is real and what isn’t, all tied up in the perception that ‘computers never lie’?

Katina Michael: The culture of screens is a very misleading culture. Don’t believe everything you see—that’s what we’re taught from a young age. Increasingly, though, I believe people do believe what they see. We’ve seen examples of this where police have been accused of brutality without provocation, but further evidence from smartphones has shown that context was missing, and that the police were perhaps provoked into some use of force. On the flipside, we now have police trialling wearable cameras on their bodies in order to decrease complaint handling. However, censorship is still possible through the screens—my eye-line, where I’m looking, may not be where the crime is taking place. So if I’m a police officer taping a whole scene of an alleged crime, but don’t wish for particular brutality to be shown on a screen, I simply look away. I’m recording this way, the activity is occurring the other way, and I’ve just performed censorship. So don’t believe everything you see on screens. It is often not the whole truth.

Jordan Brown: Do you see manipulation happening in screen culture today?

Katina Michael: I think manipulation on the Internet certainly occurs. We call this disinformation. This is nothing new; propaganda is a historical element of this screen culture—if I tell my message to enough people out there, they’ll believe it. If it comes from enough credible sources, the populace will believe it. If I look up a particular entry on Wikipedia, “definitely it’s correct” [said sarcastically], I believe it. There are now over 1,500 administrators correcting and making edits to Wikipedia. So we believe what we read by nature, but who are these administrators? Nobody knows you’re a dog on the Internet, right?

So is there disinformation occurring? Sure. Are people from all walks of life engaging in disinformation? Certainly! It doesn’t just mean online communities, or communities related to organisations. Even the idea of brand awareness is a type of propaganda—“I’m pushing forward a particular brand, I’m advertising, I’m pushing this to your screen.” Every time you go onto Google, for example, and do a search related to this or that product, I will push more of that product to you. So we are subliminally being fed messages, whether we realise it or not, that perhaps sway us towards a particular brand, but also sway our intentions and motivations towards X or Y. This can be done by companies, by politicians, by government agencies, etc.

Jordan Brown: And do you think that an element of that manipulation is what is driving technological advancement? Say with targeted advertising for example?

Katina Michael: Companies tend to defend their practices as being purely related to marketing. “We elicit this response from you; we use behavioural tracking and cookies in order to sell you more of what you want to see. We’re not doing anything bad; we’re just giving you more of what you asked for.” The question, however, arises when you start to consider, at a much broader, higher level, the point at which all of this advertising—and the affiliate advertising, affiliate sharing of data and partnership sharing of data—becomes used to exploit the consumer. At what point do we say enough is enough? And at what point do we say, yes, we would like more? I think people are sick of getting more of the same, but I also think we are oblivious to the fact that push marketing or push advertising is occurring, because we’ve become immune to the practice. If someone was to film a heavy user of a smartphone for a 24-hour period, and then replay to that user what they looked like while using their smartphone, I am sure that user would be asking questions about their ‘conditioning’ to all consumer electronics.

Jordan Brown: Does that mean that screen culture is creating bubbles around people then too?

Katina Michael: The screen culture makes people look within and not outside. So when I’m using my smartphone, and I’m being sent instant messages, and I’m being communicated to—it’s about me. It’s about me and my interactions, and people can say that’s great for personalisation, that’s how I want it—I want to customise my whole life—but in fact we’re internalising a lot of things. If I think about “me”, then most likely I will neglect my children, I will neglect my partner, I will neglect my workplace, because it’s about me and my interactions and the instantaneous communications that take place. There’s always a danger in that—in ignoring your neighbour, in a lack of collective awareness. It’s about insular things, and in so doing, what you are doing is removing your ability to think, removing your ability to pray and be peaceful about things, because you’re constantly being bombarded by messages (which may be entirely irrelevant—spam, for example). You’re constantly thinking that these are more urgent than the baby crying in the next room who requires milk or food, and the screen culture just propagates itself. In order for me to internalise my communications and look down and keep texting and keep messaging back, I also impose the same culture on my children, because I just tell them to go look at the TV for a little while longer, go onto the Internet, search some more things. So I am spreading this mimicry of sorts, and I can’t stop the cycle because I’m deeply engaged in it. And when my senses are enveloped and it’s about me and my communications—not about my children, not about my partner, not about my workplace—I think there is a great danger to trust within society, and to building relationships with one another (or rather a lack of building them), when we are concerned about the me.

Jordan Brown: Is there an addictive quality as well?

Katina Michael: I think our use of smartphones, the impact of smartphone communications on our daily lives, and the screen culture are not only addictive but “obsessive-compulsively” addictive—like a cyber-drug of sorts. It’s a health problem, and we’ve yet to even begin to ask the right questions. It took us 30 years to realise that fast food causes obesity. This is a well-known fact now, but it was unknown for decades. Fast food advertising—even in the sports arena—contributes to obesity. How long is it going to take us to realise the addictive nature of smartphone usage on our being and our family units? Five years, ten years? Will that be too late, because the mimicry will have been so well entrenched in the next generation? What do you do about it then, when all the Millennials are entrapped in a particular way of life, resembling what some would argue is a “zombie-like” state? The thing is, you’ve got to do something about it today. We’re not even coming to grips with the obsessive-compulsive disorders of young people suffering from anorexia because they’re gaming online for too long; of young people wetting their pants because they forget to go to the toilet, being almost at the next level of a multiplayer game with their friends; and of young people being stuck in their rooms because of a screen culture which has pervaded bedrooms that were traditionally used for study and sleep.

Jordan Brown: People dying in Korea in Internet cafés…

Katina Michael: Yes, people dying in Korea in Internet cafés. Babies being starved to death because their parents are raising online children in Korea, the most networked nation in the world. We’ve got to stop and think about our next phase as a humanity, and our next movements forward as a populace. We’ve got people starving in less developed countries, people being treated like slaves in newly industrialised countries, and here we are in the more developed countries saying, “I’ve got Google Glass.” Well, congratulations. New innovations, augmented reality, and perhaps augmented death at the same time.

Jordan Brown: So what are these companies building then? If it’s not us building these things—we’re just reacting—what’s being built? What are we currently in today?

Katina Michael: Over the last two hundred years or so, since the Industrial Revolution, we’ve been stuck in profit maximisation and sales maximisation modes. Organisations generally have one of two goals: they either want to be a profit-maximisation firm or a sales-maximisation firm. And if the goal is profit maximisation, it’s about making money, it’s about making your shareholders rich. Do I care about the externalities and the side-effects on the everyday consumer? No, I don’t. Most people you ask who are building new engineering systems don’t think about ethics. It’s their job to build, to create, to push the boundaries, to build new applications that people will find use value in—but these days we’re not even concerned about the value of the product. What has Google done recently, for example, in launching their digital glass product? They’ve released Google Glass to 8,000 “explorers”. “Go and explore. Tell us what you do with Google Glass. Tell us your new applications. Thrill us with biometric recognition of your friends, and your address books. Augmediate your world so that when you look through your digital glasses, you only see the advertisements you want to see on the billboards.”

Jordan Brown: The bubble-

Katina Michael: Again, we’re living in a screen bubble. We’re protected by this forcefield of sorts—again, returning to the self: it’s about me, not you. And it’s also about finding comfort in the creation of inventions that lack positive utility for society.

Jordan Brown: Does this bubble also serve as the greatest surveillance grid ever constructed?

Katina Michael: The large service providers in the world today—and we all know who they are—offering so-called “free” applications, “free” email, “free” uploads of data, hold the key to unlocking who we are as a digital footprint, as digital DNA. The more we give away freely to these free services, the more it will be used against us: to identify us, to categorise us, to segment us into a particular market type—the elderly, the more secure, the socialite, the worker bee, the teenager, etc. However, we are going beyond the typical market segmentations that were created in the 1990s through, for example, the mobile revolution. Today it is about you—and not about the collective. We’ve become so smart in our algorithms, in our neural network approaches, in our semantic analysis, in our sentiment analysis—various types of approaches to analysing the data you publicly and voluntarily disclose—that this data is then being used and repurposed to send you more of the same.

Jordan Brown: Or even data that can be inferred or assumed in aggregate.

Katina Michael: So the data you provide through touch-points on your mobile phone, on your laptop, on other devices—even your VoIP sessions through Skype—can be used to infer a great deal about you. And with predictive analytics, we no longer require concrete historical evidence to place you in a situational-awareness context. If you are at location A—a university, for example—then I can infer that most likely you are a student or an academic. Taking this further, if on a Sunday morning you are in a particular location that is not a well-known place to be visiting on a Sunday morning—or, at the opposite end of the spectrum, a church, for argument’s sake—then certain assumptions are being made about you, about your likes and dislikes, and about your character traits. No-one is immune.

Jordan Brown: So does society control technology or is it the other way around?

Katina Michael: Society creates new technologies. Initially…

Jordan Brown: Actually, wait. That’s something I’d like to clarify. Do you think it (technological advancement) is driven by these people over here that are creating all of this stuff and everyone else in the bubble is reacting to that; or do you think that it’s us as a society opting for further advancement, for this surveillance grid to perpetuate itself, by the choices that we’re making in that space (the bubble)? Or do you think it’s this force over here (the technology companies) acting on this force over here (the consumers) and it’s all just playing out as a phenomenon? What do you think?

Katina Michael: Every member of society has a role to play in society. All of us are governed by our life-world—that which encompasses the motivations and drivers for how we go about living our lives. For example, if I’m an engineer in an engineering community and work for one of the large ICT organisations, my life-world tells me and informs me: “Create, design, build, collaborate, share knowledge. Strive for the next product innovation, the incremental innovation which is better than the one before.” I may be driven by ethical codes of conduct in engineering and design, but I may not be really interested in legal issues, or how the media interprets this, or even how consumers might interpret the product that I prototype, patent and release to the world. If I’m a user, am I mindfully adopting new technologies? Or am I just doing it on autopilot because everybody else is doing it? And if I don’t have this application, then I become ostracised—my community refuses to contact me, because they say I am making it difficult for them if I do not join Facebook.

Jordan Brown: The mobile phone is another example.

Katina Michael: Yes. The mobile phone can be an inclusive device or an exclusive device: it may include you if you are a fellow mobile phone user, or exclude you if you are not. I’m not the first person to have trialled Facebook and to have lost a whole bunch of university friends when I deactivated my account. Quite interestingly, when I reactivated my account two years later, people I had lost contact with started to communicate with me again. So new technologies can be used to make people part of a social network, or they can be used to exclude people by default if they don’t wish to opt into such a new application. This is quite normal with new devices. You either upgrade, keep up with the Joneses and with everybody else upgrading, and be part of the in-crowd, the clique; or be left behind and live off the grid. However, most people don’t have time for those living off the grid. If you don’t have an email address these days, you’re probably a non-person. There are members of society who, for example, don’t have a driver’s licence and don’t have email accounts, and these people are being left behind; they are fearful of the change that is occurring, and they are somehow being co-opted into having to change. Even if you’re 65 and have never driven before—get your licence; it will guarantee your passport application, for example, if you want to go overseas. If you’ve never had an email address before and you don’t see the use in having one, you might have to get one if you want to receive product updates from company X; otherwise you fall off their radar. So we have a very hard time dealing with exceptions. If I don’t have an ID card, if I don’t have an email address, if I don’t have a mobile phone, if I don’t have a Facebook account, then the question becomes: do you really exist? Your normality is probably even questioned outright. How do you, as a person living off the grid, deal with this scenario?
Is there a manipulation of sorts by service providers? Have business processes advanced so much that you’ve just got to get on board if you don’t want to be left behind?

Jordan Brown: So it’s the network effect you’re describing?

Katina Michael: What we are doing is empowering various online applications, such as social networking applications like Facebook. I read yesterday that 30 million dead people have accounts on Facebook, and yet there are close to 1,000,000,000 people now on Facebook (and some people have multiple accounts). What you have is this “get on board, get on the bandwagon” mentality, the domino effect, the network effect: “make sure you’re there, otherwise you’ll miss out.” And the more people who get on board unquestioningly, the more this false conception of the screen culture is propagated.

Jordan Brown: So what does that look like for the coming next few years?

Katina Michael: About 20 years ago, I heard about something called the Follow-Me-Number, which was published in an International Telecommunications Union (ITU) report. A lot of protocols that used VoIP were being developed and discussed at the time: how can we create a Follow-Me-Number for every single person on Earth, sort of like a universal lifetime identifier? So you don’t have to worry about changing mobile phone numbers, or losing one as you’re changing phones, losing a SIM card etc. You have one constant email address that follows you around, and so on. To be honest, this smacks of the person-number systems that were introduced just after World War II for social security purposes and rationing, and for giving money to people who required it for services like social welfare. A Follow-Me-Number, just like unique DNA, is quite eerie when you think about it. Just like you have fingerprints that can’t be changed, in the future your Follow-Me-Number will be unchangeable. We currently find it easy to change credit card numbers when someone has defrauded us, but what will happen when your person number or your Follow-Me-Number is defrauded? What better way to institute a universal lifetime identifier than a microchip implant worn on the body and/or embedded in the body. What is really creepy is that most people would be devastated at having to change their cell phone number today, so in a strange sort of way, this Follow-Me-Number is already here.

Jordan Brown: Okay, and so taking some examples from history, say the infamous rise of the fascist regime in Germany which was heavily dependent on profiling people, is there a risk then based on historical experiences, of recent memories and generations, coupled with the “I’ve got nothing to hide, I’ve got nothing to fear” sentiment? Is there a reasonable concern that the rise of this big data, this vast surveillance society, lends control to a small group of people which could potentially enable really intense abuses?

Katina Michael: Most people who talk about privacy protections and privacy principles have studied history very well. They understand the risk associated with amassing large stores of personal data: your date of birth, your name, syndromes that you may have, whether you’re a life insurance member, whether you drive three cars, have five children, have had previous marriages—all of this data is highly personal. On its own, as individual pieces of information, it may not tell anyone anything. Collectively, however, particular patterns can be used to infer almost anything. If we believe that what happened during World War II is not possible in today’s society, then we have a narrow view of history. Anyone, at any place, under any particular government agency’s control, may find themselves on the wrong side. Here is more of that asymmetry I talk about often. I may not fit the latest fashion of thought; how will I fare in that particular community or society at large?

Jordan Brown: In the future?

Katina Michael: It can happen any time. If there’s data that is stored, depending on the particular regime at that particular time, anything is possible. Who would have thought that what occurred during World War II would have happened, even with the limited automated computing available at that time, which was pretty clunky and based on punch cards? People’s religious beliefs were used during that time to segregate them. The Nazis attempted to remove Jews from their homes to make them completely at their mercy, if not kill them, as we saw in the gas chambers of Auschwitz. And what we need to understand is that while we don’t have modern-day gas chambers that look like gas chambers, you can squeeze the life out of anyone with the knowledge that you have of their personal data.

Jordan Brown: Yes. And the way I see it is that this panopticon has been built—it’s not some grand conspiracy, they’re not all colluding with each other (to make this happen). It’s just the temperament of technological advancement—like how you say, the engineer’s mindset is “we create”. It’s just happening. The panopticon has been built, and the people in the bubble that are affected by it are reacting. So what’s next? Where is that going?

Katina Michael: In April, we saw the devastating bombs that were used to maim and kill three individuals during the Boston Marathon. This is a classic example of where surveillance technology absolutely failed. Initially the wrong two suspects were identified, and this went viral on social media. They didn’t do it. It was asymmetric. The two individuals who were wrongly accused of having planted the bombs were defamed in effect. They were scared, as one college student said in response, to actually leave their home. A few days later, the real suspects were identified, but the asymmetry had already taken place. Two alleged perpetrators were identified early on, it went viral, they were categorised, and it was probably as if, to them, their whole world was against them. So any of us could have found ourselves in the same situation—wearing the ‘wrong’ clothes, looking the ‘wrong’ way at that particular location.

What authorities have now begun to question is how much more surveillance they could have applied to find the perpetrators faster and bring them to justice. However, biometrics failed the authorities. Surveillance footage failed the authorities. Until one of the victims who was maimed said: “I saw the perpetrators look at me and this is what they look like.” He was able to give a direct evidence account of what he saw. However, if we say to ourselves “we can get better at this, we can introduce new technologies, we can introduce better biometrics on mobile CCTV cameras, our smartphones can have particular sensors that can be recording”, and if I’m a law enforcement officer, “what we want to do is proactively profile the community to identify potential terrorists in that community”. Well, then, we are going the wrong way. These are anticipatory strategies. This is situational awareness and proactive profiling, which means that you are going on potentially inferred data, or big data analysis as it’s called, without actually being able to verify that this person has any intent in their head to commit a crime. This is when it gets a little bit scary as a member of the community. You find yourself in the ‘wrong’ place via your smartphone and its GPS-enabled chipset, you look ‘wrong’, your behaviour or your physical attire depicts you as a potential terrorist when you’re not.

Jordan Brown: Like the fact they had a bag…

Katina Michael: You have a bag, you have a hat, you are wearing a hood or concealing your eyes with glasses, etc. What we don’t want is a society where there’s a chilling effect and people actually don’t want to go outside their homes. Over the last 10 years I have received numerous emails from individuals, potentially suffering from mental illness, who are scared to exit their front door due to surveillance cameras. I’m not propagating that view of the world; people are feeling it. These people are actually feeling this pervasive computing and invasion of their privacy, especially when they live in an apartment that is under constant camera view from an adjoining building. What do you do then? Keep your blinds closed and live in the dark? So yes, most people have nothing to hide in society, but some people feel they have everything to protect.

Jordan Brown: So it’s happening then? The effects—maybe in its infancy—but you can see the chilling effect you’re talking about happening now?

Katina Michael: I think the chilling effect is happening now, and if we do say that some people who are mentally disturbed are disturbed by these additional uses of technology—and we’re not just talking CCTV cameras, but people feeling like they’ve been implanted with chips, for example—then we have to take these concerns seriously: not because these things are happening, but because people are feeling triggers towards paranoia. You could say, “We don’t care about these people, they’re the minority. Let them have their paranoid schizophrenic attacks and their mental illness; it is their choice if they feel they cannot step foot outside their house. We’re not going to solve their problems, a psychiatrist will, and they will perhaps need more medication.” But in actual fact, we are creating new technologies that in the near future may have most people concerned about who’s watching, and when. Can you imagine what that might feel like? You know, I know there are sensors that track me. I know I’m carrying a smartphone that I can get instant communications with. I feel stressed going to work because I know my boss knows when I actually arrived at work, when I take my lunch hour, when I visit the bathroom—these ID cards can tell many organisations what is going on with employees clocking on and clocking off during particular sessions of time during the day. But what kind of society are we moving to when we need alarms, bells and whistles for absolutely every action we take?

Jordan Brown: Is that a panopticon?

Katina Michael: To me personally, what’s a panopticon? I think it’s when I arrive in an uberveillance society where I can’t even think, because the thoughts in my head have actually been inferred by somebody else. So I feel like I’m enslaved, I’m trapped—not within a prison wall, for example, but within myself. I can’t have the freedom to be who I want to be, to act like the person I want to act like, and just be myself without feeling someone is scrutinising my every move. And that is a real issue.

Jordan Brown: Can you tell me a little more about technology addiction?

Katina Michael: Most people haven’t realised how over-reliant they’ve become on their smartphone. It’s not just a tool used for emergencies these days. I think many people falsely make themselves believe that they’re low-end users of mobiles when, in fact, they’re glued to them. You know, they drop off their children at the daycare centre, and there’s an inclination to pick up the phone and check the messages that arrived five minutes ago. You arrive at work: “Oh, I’ve got to check the messages as I’m walking down to my office.” And the excuse often used is, “I’m trying to catch up and become more productive during my work life, so that I don’t have a bank-up of messages when I get to my office or when I get home at night.” But actually what people do is go home and filter through even more messages, and then they get up in the middle of the night because they hear the phone buzz—and because they’re within an arm’s length of the phone, they’ll be awoken, pick up the handset, look at the phone, respond in the middle of the night and then go back to sleep. We are living a 24-hour cycle these days. The world has become an always-on, always-connected, online, global world. We haven’t been able to distinguish the boundaries between our home life and our work life very well, despite the quantified self movement saying things are getting better. And the thing is that there is a natural force in writing an email: you send a message, you get one back. It becomes an endless trail.

Many employers know this, and so they provide free smartphones and free laptops to their workforce, because they know they will get more productivity out of them during the working week—even the weekend. So we’re expected in this world to somehow carry on with our everyday relationships, as well as be always connected. Somebody submitted a short article to me for IEEE Technology and Society Magazine and said it was a hopeless situation one Sunday morning when he walked into an Indian cafe and a husband and wife arrived shortly after to sit at a nearby table. The husband’s phone rang and he answered it, and while the wife was looking at the husband on the phone, she took out her own phone and started interacting with her messages because she felt ignored. Their food arrived, and they ate it while they were texting and talking. At the end, they finished their meals, the husband got off the phone, the wife put her phone back in her purse, and they got up and left. What kind of interaction is occurring at that point? None. At least not with one another in the physical space. We are almost stuck in the online world and cannot distinguish between the off-line and online world, even when we are in the presence of other people.

Personally, I know at times when I’ve been working at home on my laptop, trying to finish off an editorial for a pressing deadline, my children will come up and chat to me, and I respond to them, and they say, “Mum, you’re not listening, mum—close the screen.” And I say, “Five minutes, I’m almost done, just give me five minutes, I know you’re really hungry.” They come back in five minutes. “Mum, we want to eat.” And I say to the kids, “I’m sorry, I’ll be with you in five minutes.” In effect, something like 20 minutes has gone by. By that time the children are so hungry that they’ll come over to the keyboard and start pressing the keys to make me make a mistake, so that I am forced to get off the computer. The reality is that for that split second that I’m engaged in an online activity, or even just on a computer—it doesn’t have to be online—I am in a virtual space. My head is in that editorial. My children are at my side, but I can’t distinguish between the two. And that which I’m engaging with on the screen is more imminent, because it has my full attention, than that which is in the physical space. That is a dilemma that I think is pervading most homes these days, and somehow families are keeping up with taking the kids (if they have kids, by the way) to sports and other afternoon activities, amongst this jostling of time between the Internet, the smartphone, and the laptop computer.

Jordan Brown: So that’s it—the disconnection from reality.

Katina Michael: That is it. There is a disconnect from the reality in front of you. So I can be looking at you in the eyes, but my mind is still engaged in that practice, and if we keep propagating this with newer innovations that continue to draw us away from the physical (e.g. augmented reality, drone applications)…

Jordan Brown: Or even just exacerbating the bubble?

Katina Michael: Yes, we are actually exacerbating the bubble. So if we propagate this culture, this screen culture, this online time, this not recognising the physical people immediately around you, then we’re just going to get more and more lost in an online space which is really somewhat unreal. What happens online, with regard to relationships at least, may not be as real as we think. Sending someone a virtual hug, for example, is not exactly like having a real hug, a real embrace. But we seem to be filling up our world with status reports, status updates, Twitter messages—I mean, I read yesterday that there is a tweet alert for Huggies nappies so you can figure out when your baby has wet his/her nappy. Do I need a tweet to understand when my child… Am I so disconnected from my child that I need to get a tweet on my handset? I think there are two reasons this application has been introduced. The first is that parents are so engaged with online activity that they forget the child has not been changed in several hours, and the child gets nappy rash. The second is that parents and carers have potentially lost touch with the physical; they don’t want to touch the child or remove the nappy to see or to smell—so our senses are being dulled and replaced by tweets. This is really linked to our ability to recollect and to prioritise what is important.

The next thing that’s going to happen is that we’ll have even more alarm clocks. Some of us already have an ergonomic alarm, set between 7 o’clock in the morning and 7 o’clock at night, that reminds us we need to get up, stretch our legs and move away from our computers. But there are still many who eat their breakfast, their lunch and their dinner in front of the computer. Those with home offices can sometimes suffer greatly, as there are limited disruptions from peers besides an email or a telephone call. So we’re stuck, as if in the old mines. You know, you go down into the pit, it is dark when you wake up, you start typing, and you stay there all day. You want to go home and it is still dark. Really, have we got better living conditions than they did back in the times of the mines, those really dirty mines with bad working conditions? Of course we’re allegedly better off these days because of the clean air we’re breathing in our offices, but we are still to some degree stuck in the mine mentality where you wake up and it’s dark—in fact, sometimes we don’t even go to sleep! I shouldn’t even be using that expression, “we wake up”, because many professional workers are now always on call, always connected, always replying, always sending messages back and forth, so day and night are even difficult to delineate.

Jordan Brown: It’s a strange thing being disconnected.

Katina Michael: Some people have admitted that they never disconnect and that they find long plane rides especially difficult.

Jordan Brown: So people are further removed from the actual happenings behind the interfaces (of screen culture)… so if most people just pick up a phone and make a call, and there’s no real knowledge of what happens behind that screen to make it possible—the vast wired and wireless infrastructures, the programming of the phone, the interfaces—as screen culture perpetuates itself and as the user becomes more removed from those processes, is there a loss of understanding as to how those infrastructures work and the risks that this presents?

Katina Michael: We’re often told as consumers not to worry about the black box. The black box is the inner workings of a particular network or a particular application: how it works, how it’s built from basic principles, etc. For example, today people have reusable software. They don’t need to know how to program. They can get a few chunks of code from here and there, have some level of work experience, and bring reusable software together to do things that seemingly work, but with little knowledge of the inner workings of a single module. This is a way of building new systems. The technical things are not for everyone. Don’t worry about what’s going on underneath. Don’t worry that a call carried from A to B may go through about 15 Internet hops between locations. Sure, data can be intercepted, but don’t worry about that—who wants to read your email? This approach to simplicity, to creative design and critical making, is what you see in hacktivist kinds of hackathons, where people come together, you have these crazy ideas, you trial them out, and you get an end-to-end process going. Every person is like an individual unit in the building of a new prototype or a new application—they don’t need to know what the next unit is doing.

But there’s a problem with that kind of approach, in that you may know your own particular area very well but not know what’s occurring in the next phase of the development of that product. It’s like asking a professor who has built a small component—a scientist who has built a small component of an implant for the heart, for example—to describe their own component; they can do that really well, but then ask them to describe a little bit further out, and they say, “Sorry, that’s not my area, I can’t really tell you how that was built.” This is a problem because we don’t realise what goes into potential wireless interceptions, or potential jurisdictional issues between data storage versus data ownership versus data sovereignty versus compliance with particular laws. People almost don’t care about these fundamentals these days. And if they do, they’re tactically placing these data storage centres in places that will provide them with the liberty of accessing that information.

So the companies have a major role to play here in ensuring that they set up systems and networks not just for their own good, but for the good of the community at large—and not to save money by storing documents and data offshore that contain the personal information of citizens. But I know what’s occurring: it has to do with money again, with profit maximisation, and with becoming elusive to jurisdictional and legislative issues. If I want to continually evade this jurisdiction based on that particular act, what I do is just put my data centre in a place where there are no laws about how I store things, for how long, the physical lifetime, etc. And this opens up individuals to abuses. This is a big problem. Governments are continually cutting public-sector roles, and we’ve seen this at the state level over the last 12 months, especially with the shift to cloud computing practices.

Jordan Brown: And then the users are deferring that knowledge and responsibility to people that don’t necessarily have their interests at heart?

Katina Michael: Service providers provide terms and agreements that they very well know users will not read. We had Google, for example, squashing their X number of different applications into a single privacy policy statement. Now, that can work in two ways. The argument Google provides is that of simplification, when in fact the reality is that it decreases Google’s liability for particular attacks on particular individuals’ datasets. So, by removing liability as a service provider, the onus goes back on the user, and users are not equipped to deal with any breaches of their privacy or security.

Jordan Brown: And may not even know that such breaches or liabilities exist?

Katina Michael: Yes, most users don’t know, for example, that they’ve been hacked; they cannot distinguish between sites that are real and phishing sites, and they are not educated, not cyber-aware even about virus protection, because the hacking attempts and the breaches in security and interception are so advanced these days. Consider Raytheon’s RIOT software, which can check whether Joe Bloggs has logged in and checked into FourSquare, and look at whether they’re at the gym at 6 o’clock every morning or doing different things during the day. And that’s when we get to individual targeting. If I am empowered by this knowledge of looking at your personal journey, tracking you through the day, then I can have some influence over you, because I know about your movements. And it doesn’t mean that you’ve done anything wrong.

Jordan Brown: So that leads into one of the big questions then: it’s progress for whom? So as you’ve mentioned, if someone has power over you and that’s playing itself out, say for example if someone runs the Raytheon software that can potentially be watching a lot of people, we have an ‘us versus them’ dichotomy. It is progress for whom then? Who benefits?

Katina Michael: I think about this in terms of the poverty cycle—the rich getting richer and the poor getting poorer. And the poor get poorer because they’re stuck in that rut, and they give birth to children and the children are brought up in the same environment. Unless something magical happens, they will continue to be in that environment, stuck in that endless cycle. The same thing happens when I’m stuck in an endless life of upgrades, whether that has to do with computing or with any particular application that I might buy into. And so those people who are building the applications, who know the inner workings of the applications they’re building and understand how the infrastructure works, are more empowered than the people allegedly adopting the products being sold to them voluntarily. A lot of my students, for example—and I found this out early on—would work a whole week to pay for their mobile phones, in the early days when they were particularly expensive. I used to ask, “Why are you so tired coming into my lecture theatre?” And I’m thinking, “You mean you’ve worked all week to pay off your mobile phone?” There was something wrong with that model. I told them: I’d rather you have no laptop for your honours project, I’d rather you not buy into a mobile phone, and come to my class awake so you can learn something new, or at least participate in the dialogue and share an opinion with me and the rest of the class. So it’s almost like we’re stuck in a cycle that’s not called the poverty cycle—I don’t know what you want to call it, “the enslaved cycle of ICT” perhaps—but I’ve got to have this, and I’ve got to have that, and I’ve got to have the next thing, and if I don’t become a member of Facebook and I don’t start geotagging, I’ve missed out… the opportunity cost is too great. However, what are the trade-offs here?
I’m more worried about status updates on Facebook than I am about living my life. And this is the problem. I forget about living. I’m just doing what is expected to be done. Replying to that email, replying to that Facebook wall post… where has the common-sense thinking in all this gone?

Jordan Brown: And in the meantime, those small groups of companies are further closing in on their influence of the people who are taking on those technological advancements, those developments?

Katina Michael: Of course, service providers become more and more empowered the more personal data they gather. What you don’t want is churn. Churn is when an individual user goes from one application to another. You want your user to be presented with stickiness drivers—this is a technical term in customer relationship management—so that your user, your customer, comes back to your portal, interacts some more, gives away more. This is the whole business model of customer relationship management. Provide enough stickiness drivers and they come back, they provide more, they disclose more. And how can we capitalise on this social ensemble? On this information disclosure? What can we do? Let’s analyse it then. Let’s analyse what they’re talking about. Let’s analyse what they feel about Brand X or Y, and if they feel badly about Brand X, let’s employ the right strategies to counter that feeling. So are we being manipulated? Of course we are—and by the very data we disclose. This is the problem: we don’t realise we’re at the beginning and end of that cycle. We provide the information, someone analyses it, it’s fed back to us and we eat it. So we might not think we’re being manipulated, but in actual fact the whole idea of customer relationship management is about this cycle. It’s about stickiness drivers and preventing churn. That’s how Facebook, for example, can have more users than G+—these guys are not silly. I mean, at the highest level, imagine if the organisations with the largest market share were brought together in a kind of sharing and merging relationship—imagine, for example, Facebook, Google and a number of other organisations like Twitter deciding to share their user data and profile individuals. We have to realise that anything is possible, and these tech giants will continue to push the envelope.

Jordan Brown: So what happens then when you’ve got these companies that are individual entities? What happens if someone, say the intelligence organisations, say the National Security Agency in the United States comes along, and collects all of those vast data stores from the one’s you mentioned: Google, Facebook, Twitter and others?

Katina Michael: We are being sold that we have transparency, at least outwardly, of the number of requests that are made to, for example, Google for a particular user’s content or metadata. Google, for instance, publishes quarterly the number of requests it gets from law enforcement agencies. Google has also stated that if a law enforcement request comes in for a heinous crime—for example, a murder or a rape or what have you—it will not tell the user that they are under surveillance or that their data has been or will be provided to the law enforcement authorities. This is quite different from, say, a secret intelligence organisation that may wish to investigate individuals. We don’t have that transparency. We should have transparency. Why should these secret intelligence organisations be exempt from a warrant process? This is where a number of Acts in different jurisdictions really don’t hold up to the mark, and what I’m afraid of in the next 10 years is that we dilute these warrant processes and have warrantless monitoring. Some malls already have particular equipment to track users and customers through shopping centres: how long they’ve stood in front of a window and pondered walking in, and whether they then made a transaction within a particular store. So although at the moment this data is being gathered anonymously in the shopping mall context, the question is what will happen when we start to dilute privacy principles and privacy Acts and say: well, if these private companies are surveilling others, and we have Raytheon, for example, producing products that are able to track people behaviourally, using for example check-in points and check-out points, then why not just leave it open to anybody? And we can make data-driven innovations from this. We can provide shoppers with better-quality experiences through shopping malls—there’s always an excuse to dilute privacy. There’s always an excuse to strengthen security.

Jordan Brown: Do you think that’s really been exacerbated in the “post 9/11 world”, where terrorism is a buzzword used to dilute those privacy principles, and to shift the balance of power further towards these secret intelligence organisations?

Katina Michael: I think greater visibility was always on the cards. Being able to access data without warrant processes, despite these age-old privacy legislation enactments, surveillance device and listening Acts, and a whole gamut of telecommunications data interception Acts and so forth—it was always on the cards. Things move faster and easier when there are no security roadblocks, when I can access anything I want, as I want it, wherever I am. And we’ve seen this dilution starting slowly since the inception of geographic information systems: census data on CD, customised to your needs—you know, ring the Australian Bureau of Statistics and tell them what you require; provide, for example, an Australian Business Register and identify businesses at a collection-district level. We’ve now got satellite imagery we can purchase as tiles. This was available 15 years ago when I was in industry. We could overlay and register our own images and our own photographs—a bit like Google Maps for private organisations. This has been an ongoing process. Let me create a Google Maps—what a great idea. I can map every administrative boundary in the world. I can map every street location. I can look at what we have topographically on our Earth. Isn’t it a fantastic idea? But then we go to the next stage: let’s go into different cities and start photographing every cadastre plot, and let’s go and do more. You know, if you’ve got a photo of somebody’s home and you want to upload it, hey, upload it to Google Maps. Isn’t it nice to have visibility of what a place looks like before you’ve gone there—I’ll never get lost again. Navigation: fantastic for creative industries and new services, fantastic for open innovation. But then where are we going next? Let’s use Google glasses, and let’s not just take a Street View: let’s go into the house, let’s go into the plot, let’s record 24×7 and upload that onto the Internet.
What we’re being asked to be is drones. We’re “manned drones”, not “unmanned drones”—we’re manned drones. And I can tell you that in the near future, what we will have is people being paid to be drone-like recording devices, where they walk up and down malls, they walk up and down public streets capturing visual evidence of passers-by as they go about their private business. At what point are we going to say we should not be uploading this data to Google Maps? We should not be videoing everything in sight, recording it, and uploading it as if we own it, or I own your image, or Google owns it because it’s on Google Maps? At what point do we say enough’s enough and we stop surveilling one another?

Jordan Brown: Makes me think of the case in Britain where people drove out Google Street View with pitchforks…

Katina Michael: Yes. They got Google out quick smart because they ambushed the vehicle and threatened to smash it! But what’s worse is if we’re going to be recording everything we see. Imagine, I’m recording you recording me right now. And that’s okay, if we’ve got consent to interact with one another that way, but there is no way I’m going to get everyone’s consent as I walk down the street. And some people may be having a bad day—you’re entitled to have a bad day, so long as you don’t take it out on anyone else. For example, you may not be feeling well, you may be crying, you may be suffering, you may have had a relationship breakdown. Do you want that captured on video? That private moment as you’re walking down the street—you’ve just been given the news that your child is about to die in a hospital. Or you’ve just been told by your husband, “I’m sorry, I don’t love you anymore.” Is that what we want to capture—all those bad moments? We’ve got to get serious and get real, because life is not hunky-dory. Life is not always smiling, like we see in those Google Glass promotions—the airbrushed look. You know, at 6 o’clock in the morning, I know what my hair looks like. I know my kids are screaming for food. Do I want that publicised on television? No. Do I want that publicised on the Internet? Of course not. If I want to go and get my mail from my mailbox in my pyjamas and my robe, I should be able to do that without feeling “Oh, should I be dressed like this? Should I brush my hair before I go outside?”—you know, on Sunday mornings, for example. And what we’re doing is we’re about to say “Hey, that’s okay, let’s pervade everyone’s life. Let’s not care. Let’s see where we’re going to go with breaking down everyone’s privacy. Let’s not look back—look forward, advance.” And this promise is a fake promise, because the other stuff that I mentioned a while ago happens. We have struggles, we have challenges, we have crises in our life.
We don’t want to be replaying those over and over again.

Jordan Brown: Can I ask again of the person that would say “I’ve got nothing to hide, nothing to fear”… How do you persuade them given that situation?

Katina Michael: I should just stay outside their home and start capturing their every move as they interact in their front lawn, their back lawn, anywhere I can see from the front of their yard, and then what I should do is get in my car, put a GPS device on their car covertly and follow them down the street. And then I should get out at their workplace and say “Hi, it’s me again. I’m wearing the camera. I’m recording you. Don’t worry, I won’t put it up on the Internet today.” And then I should follow them home and then see how they feel the next day when I do the same thing. And the day after that, and the day after that… And I think they’ll get really sick of me really quickly.

Jordan Brown: So is it only then because a lot of those processes perhaps aren’t so close to that person, say with pervasive CCTV doing just that? Is it because it’s not part of that person’s awareness potentially, that they may feel that “it’s not a problem, it doesn’t worry me”?

Katina Michael: Most people who go about their everyday life are oblivious to CCTV cameras—even mobile CCTV now on police cars. And what that’s called is the novelty effect—it wears off. So if something is new, I look up and I think “Oh, it’s new, it’s invaded my space”, just like when telephone lines were first introduced and people saw terrestrial lines that carried voice calls: “Wow, what are these things?”, you know? We see windmills today and we think: “Oh wow, a windmill”, or we see other infrastructure and we think “Aren’t those base stations at the top of the building looking ugly? Haven’t they destroyed the landscape?” So we do notice these things initially, but we become oblivious to them over time. I don’t notice base stations any more, and I used to work very closely with where base stations were located.

Jordan Brown: For mobile phones?

Katina Michael: Yes, for mobile phones. I don’t notice CCTV cameras as much as I used to; they’ve sort of become transparent in the industrial design of most buildings. They now have an aesthetic quality about them. I notice that children notice them, because their world is new—everything is new to a child. As they go to a mall for the first time, they ask the question: “What’s that?” But the novelty effect wears off for adults, and with that wearing off we become immune perhaps, and we forget to question what is going on. It is like being stuck in a fog: you cannot see all around you, and you hope that the car you’re tailing with its blinkers on is headed the right way… otherwise it is the blind leading the blind…

Jordan Brown: So where are we headed then?

Katina Michael: So why are we headed on this trajectory? Where are we headed? Why is this happening? For a long time when I was studying ICT in my undergraduate years, I used to study tech-evangelists, and this whole idea of technology evangelists was striking to me. Who are these guys with job descriptions called tech-evangelists? What was their role? And I remember being at a conference in Sydney of all places, before the dot-com era, when I received a business card that said “I’m a tech evangelist.” More recently, I looked at a job title from IBM that had the descriptor “chief storyteller.” Oh yes: “What do you do for a living, Sir?” “I am a storyteller.” And that storyteller was similar to a tech-evangelist. They sat between applications development and solutions architecture. So, I’m a storyteller: I tell you stories about how you can harness these products for your business.

Jordan Brown: So it’s like a spin doctor? Marketing?

Katina Michael: Yes. So we’ve created organisational positions. If you want to talk about manipulation, a tech-evangelist is probably a great manipulator who wants you to buy a particular brand, wants you to think a particular way, possess a particular ideology; but so does the storyteller. Stories and metaphor can evoke huge reactions in individuals. Now, the question is: who is proposing these new ways forward? Of course we can look at the patent database and claim that these individuals who have over a hundred patents each in particular areas, whether it’s digital glass or other innovations, smartphones, or wireless technologies, are the ones driving innovation. But in actual fact, when we start to theorise, we ask: who are those thought leaders? Who are those people in the think-tanks? Do they have diverse backgrounds? Are they representing me as I should be represented? And when you start to dig a little deeper, it’s really a very small number of people that are driving these new innovations, either by accident or by conscious decision making. For example, Facebook. You know, it was supposedly an accident, and it took off really well. I’m sure everything that has happened since the accident, since that coincidence, has not been an accident; it has been very deliberate in strategy. But I also believe that these very successful companies are co-opted by various government agencies to their own ends. Private business must always be within the grasp of government, otherwise the government does not have the ability to provide “security” to its citizens. And this is where the paradox is: for a government to claim that it has “national security” as a core interest, it must either have some control over private enterprise, or enforce “a watering down of company security profiles”. There is a symbiosis between government and private enterprise for this very reason.

Jordan Brown: Is the idea of “storytellers” like a euphemism for advertising itself?

Katina Michael: Yes, and application developers, and business developers: they all develop ‘things’. The question is whether we let ourselves believe what is being proposed by the futurists, or whether we say, “Hey, that sounds really dumb. I don’t want to live in a society like that. I don’t want my kids being raised in a society like that. I don’t want to live forever”… or whatever is the latest high-tech fad.

Jordan Brown: And that’s another one of those things too. Ray Kurzweil and other futurists such as Michio Kaku (and perhaps Kevin Warwick) see those points as downsides to the ‘human condition’, for want of a better phrase. Getting sick, feeling sad, having a finite life: “these are all things that are undesirable.” Does that not in-and-of itself say how fundamentally disconnected those ideas are from reality? And also in conjunction with what we were talking about before, about “The Bubble” and “screen culture”, and having your brain in a space which isn’t in the real world? Because to me, that says it all. If someone says, “I see being human, being alive, being a biological creature on a finite planet as undesirable”, it encompasses all of it. It’s basically saying we should be dead, we should be machines, not be human anymore, not live in reality anymore. And how is that going to happen? How’s that going to work?

Katina Michael: There are lots of different arguments to that point of view, the point of view that says, “I don’t want to die, I want to live forever, and I want to do away with my sarx—which is my body. I want to do away with my limbs because they have a physical lifetime. And I want to live forever, I want someone to flip a switch, make sure I’m always on, upload my mind to the Internet for example, and be free of physical spaces and dimensions.” The question is how realistic this is when there are people dying from lack of food every day. The reverse argument says that if we were all to have our minds uploaded onto the Internet, or onto a data storage device in some way, in some shape or form, then we would do away with hunger altogether. But to that I always say there are technical failures. There are smart grid failures, energy failures—what happens if accidentally your smart grid powers off? And who’s going to be alive physically to turn that switch off and on? Don’t tell me a fallible machine?! There’s got to be someone always there, a human, using their mind, using their physical tactile fingers to actually do something to the physical, breathing, “storage network.” But this also presupposes that we are not spiritual beings and that what makes us up is simply “biology” without “spirit”. Yes, we might one day be able to tap into the mind, but there is something that makes us who we are, and that part cannot be replicated, no matter how hard we try!

Jordan Brown: And also that that possibility isn’t available for everyone? It’s only available to the few that can afford it. So the third world, for example—it’s not for them.

Katina Michael: So are we creating an elitist society? Those who can actually afford it can adopt these new technologies, just like people who have invested in cryogenics and other means of potentially keeping themselves alive and leaving their estate to themselves in a legal sense. So, “I can keep being cloned and coming back to life, and I’ve got my estate and I live my life again another hundred years, and if we stretch it to 103—very good.” But most people on Earth won’t be able to afford these elite services, if they do come into existence in the future, as has been proposed by many futurists. And the question is: what kind of life would that be? I like my body. Although, I acknowledge that there are people who are entrapped within their body, e.g. disabled people. I can see arguments for and against this kind of lifestyle. I can see how we could free people who are trapped within their wheelchairs, and even within their minds in some syndromes, through the uploading of their mind—if that is ever to become a practical capability.

But if we think of the here and now, and what people really need today, it’s not more of that kind of thinking. We aren’t machines. We’re people. We’ve got blood rushing through our bodies. We’ve got veins, and we’ve got a heart that’s beating and pumping blood. We’ve got a pulse rate. I can touch you by bringing out my hand and I can sense your touch, I can sense your feeling. Do I wish to augment my body? Hey, it’s your body do with it what you want, but I should have the right to live how I wish as well. But if we don’t look at what is occurring to us as individuals, we may slowly succumb to becoming technology without realising it. And it does start with basic principles. It does start with having my mobile phone within reach when I am asleep and the question is whether the phone is an extension of me or I’m an extension of the phone. The more machines that we build around us that are “always on” in this Internet of Things, the more I become subject to that machine, rather than the opposite. I am at the mercy of the machine. I am at the mercy of my own creation. Is that really a world we want our children to be raised in?

* * *



Bearables: Another term for implantables, for technology that is embedded beneath the skin.

Behavioral Tracking: Refers to a range of technologies and techniques used typically by online website publishers and advertisers, but which may also include smartphone usage patterns, allowing service providers to increase the effectiveness of their campaigns. Users are very often oblivious to these goings-on, as no prior consent has been sought from individuals for the tracking to occur.

Bubble: A metaphor for pervasive consumerism. Consumers remain unaware not only of their high-tech usage patterns but also of the bigger picture issues affecting them with respect to technology adoption. They are stuck in a ‘bubble’ so to speak and that bubble can burst at any time.

Drones: is a colloquial term for unmanned aerial vehicles. Drones that carry ammunition are deployed predominantly for military and special operations applications.

ENIAC: Electronic Numerical Integrator And Computer, touted as the first electronic general-purpose computer.

Implantables: are microchips that can be injected into the body. Form factors vary but usually include tags or transponders.

Off the Grid: being completely disconnected from any form of telecommunications, including landline telephone, smartphone, email, and the Internet more broadly.

Push Marketing: is when a customised marketing alert comes to your smartphone, usually based on your location. Usually these techniques offer purported discounts, luring consumers into impulse buying. Traditional push marketing techniques include targeted mail order catalogues to your home, and email alerts based on data from online behavioural tracking.

Screen Culture: is a culture which is dominated by screens of all types, but particularly digital displays. It may include digital billboards, television, gaming consoles, digital cameras, computers, smartphones, wearables such as digital glasses, or anything else that introduces another layer between the naked eye and the natural world.

Smart Grid: is a modernized electrical grid that uses information and communications technology to gather and act on information, such as information about the behaviors of suppliers and consumers. Smart grids are meant to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity.

Smart Meter: is usually an electrical meter that records consumption of electric energy in hourly intervals and communicates readings back to the utility on a daily basis for monitoring and billing purposes.

Storytellers: a position title in some large technology companies. One step removed from a technology evangelist, the storyteller sits somewhere between the salesperson and the solutions architect, attempting to convince the client of the benefits of a given solution to their business problems. Storytellers are technically astute and are strong advocates for their company’s product/service lines.

Technology addiction: the state of being enslaved to a habit or practice or to something that is psychologically or physically habit-forming, such as all things high-tech, to such an extent that its cessation causes severe trauma. Most people admit they cannot forgo the use of their mobile phone or iPad devices for very long.

Jordan Brown is an activist, artist, musician, independent film-maker and freelance journalist whose work focuses on the interface between the dominant culture and the real impact on people, society and the environment. He has won awards and industry accolades for his work, including the 2018 Edward Snowden Award, the 2017 Change Maker Award (NIFF), and the 2016 Honorary Award of the Ministry of Justice (Slovakia).