Desk research is part of my daily routine. When digging through internet resources, clicking on cookie banners belongs to that routine, and it is one of those petty nuisances of the present-day world. I can imagine that whoever made it happen had good intentions. But does clicking ‘yes’ or ‘no’ on the banner protect me against losing privacy on the internet? No, I do not think so. With the internet of things, which in private use is tightly coupled with social media profiles and operating systems, we are traced continuously while using our computers, laptops, smartphones, and smartwatches. Data on our activities, collected as we interact with hardware or even beyond our knowledge, has become a valuable commodity. And day by day, I am becoming more and more convinced that as users, we now give more to the system than we get from it, only seemingly for free. And it is no longer just our privacy that we give away.
The price of computer use. Part one >>>
LOSS OF CHOICE AND INFLUENCE
At the latest with the outbreak of the Cambridge Analytica and Facebook scandal, the world heard of data harvesting and the opportunities of narrowcasting (otherwise called microtargeting). I am pretty sure that our data as a bulk commodity (what we now call big data) fetches quite a decent price. But because of narrowcasting, based on the data about us available to internet giants like Microsoft, Facebook, or Google, who can trade with it, we lose much more.
Narrowcasting selects a target group by age, sex, location, and interests, either declared or inferred from your internet interactions. Declared means that we personally typed our interests into the computer, for example, on Facebook. But personal declarations have already lost importance, except perhaps for age and sex. Cookies have done more in this respect for years already. IoT interactions mean that our activities are detected without us even touching hardware, by GPS, voice assistants, or a wearable like a smartwatch.
The more keywords are defined, the narrower the target group gets. This way, a target group is selected or narrowcast. And later, the one who targets the group communicates a specific message only to those selected.
Narrowcasting is used in advertising, social and political campaigns, and news feeds. The idea is quite clear and easy to carry out with proper algorithms.
Just a simple example. Suppose a tour operator is interested in wealthy and physically fit male clients of similar age and interests, to promote a tour with adventure content and good socializing possibilities. He or she needs to define the potential customers through what they do and what they are interested in. Let us assume that these are male Europeans of a certain age, always booking five-star accommodation, physically fit, and reading the Financial Times. Age, sex, and location you get from the declaration made while registering on Facebook or Google. Facebook registration is free will. But if you are not logged in on Google, you cannot use most apps designed for Android. So willingly or not, even with reservations, you register. Five-star accommodation is traced by, let us say, Booking.com. To get the Booking.com app for your smartphone, you need to be logged in on Google, and you probably use Android, which also belongs to Google. (To simplify, I omit iPhone users here.) If you use a computer, so you do not need an app, you use the Windows operating system. Windows and Android are platforms whose makers can see much of what happens on them: even when we use apps developed by others, much of the data on us is available also to those who designed the platforms. That is Microsoft and Google. Besides, on your computer or laptop, you are probably logged in on Google anyway. Tracing that somebody is reading the Financial Times works the same way. You leave a trace when you browse, no matter whether on your phone or a laptop. A cookie from the Financial Times takes root on your device. It is as simple as that. Data on physical fitness would be gathered by smartwatches or other wearables (like sports bracelets), compatible with Android and often shared with social networks. Facebook comes into play here. I do not know the technical particulars of how this is arranged.
Even for a person with no programming knowledge, the possibilities for data harvesting are easy to understand. You do not need to be an IT expert to connect the dots.
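To make that dot-connecting concrete: the selection step is, at its core, just a filter over user profiles. Here is a minimal sketch in Python; every field name and profile record is invented for illustration and does not come from any real platform:

```python
# Hypothetical sketch of narrowcast target selection.
# All profile fields and data are invented for illustration only.

profiles = [
    {"id": 1, "sex": "male", "location": "EU", "age": 45,
     "five_star_bookings": 12, "fitness_tracker": True, "reads": ["ft.com"]},
    {"id": 2, "sex": "female", "location": "EU", "age": 30,
     "five_star_bookings": 0, "fitness_tracker": False, "reads": ["bbc.com"]},
]

def matches_tour_profile(p):
    """Return True if a profile fits the tour operator's target group."""
    return (p["sex"] == "male"
            and p["location"] == "EU"
            and 40 <= p["age"] <= 55
            and p["five_star_bookings"] >= 5
            and p["fitness_tracker"]
            and "ft.com" in p["reads"])

# Every keyword added above narrows the group further; the ad is then
# shown only to the IDs that survive the filter.
target_group = [p["id"] for p in profiles if matches_tour_profile(p)]
print(target_group)  # only profile 1 qualifies
```

The real systems work on billions of profiles and far richer signals, but the principle is exactly this: each additional criterion shrinks the audience, and everyone filtered out simply never sees the message.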
So the tour operator releases an ad that is shown only to those who fit the targeted profile. No harm. Those who collected the data just sold it for a price. Just a kind of business. Many would say that they finally get information on things they are interested in. But narrowcasting goes far beyond that. The more detailed the data, the higher the price. In Europe, data protection rules are quite strict. Data must be anonymized before it is given away to anybody. And the best-paid data comes with consent: from somebody who filled in a questionnaire (usually lured with some bonus) and allowed matching the data with his name and profile. The surveys are, however, no longer sent at random. They target defined IDs worth scrutinizing. Of course, you may decline. Yet again, free will. No harm to those who refuse.
But what if we are dealing with a piece of fake news designed to manipulate the targeted people? Narrowcasting means in practice that only the target group sees the content. No other eyes have any chance to see that feed, in particular the eyes of those who could be more critical about the content and blow the whistle. Well-educated, wealthy males are probably not a good example of people who are easy to manipulate. But what if a well-organized populist political party, wealthy enough to finance complex data harvesting and commission narrowcasting, tailors fake news about the political competition to a very limited number of people? People susceptible to manipulation, with no higher education, let us say living in a small community. Anybody educated enough to undermine the content is automatically excluded from the target group. One person says that he or she saw the news, another confirms it as a second source, and so on. The fake news takes on a life of its own and is easily treated as true. Many people confirm they read about it. And there is nobody who can bring them to their senses, because those who could are deliberately excluded from the target group. It is not science fiction. It is happening already throughout the world. European countries with well-organized educational systems are no exception. The list of countries where Cambridge Analytica ‘helped’ with elections confirms it. Elections today may be won by a margin of only a few thousand votes. The ability to identify those thousands who are easy to manipulate, and the tools to best ‘work’ on them using social media, are of high value. Access to public resources gained after winning elections enables further narrowcasting campaigns. Even if the situation gets worse for most taxpayers, it is difficult to break the wheel. The tangible losses for society are much higher than the fees for narrowcasting advisers and the gains of the platforms that enable it.
The recent declaration by Google that ‘political groups would soon only be able to target ads based on general categories such as age, gender, and rough location’ is a relief. But what about the other giants?
Narrowcasting is hence not only about who is targeted. Narrowcasting is also about who is excluded. Excluded are those who are tagged as not easy to manipulate, or those on whom there is not sufficient data in the net. And it is not only politics. Some news or advertisements that could be interesting for us never reach us, unless we search for them on our own or, better said, get the idea to search for them. We are not perceived as potential addressees. A lack of a sufficient amount of big data, matched with narrowcasting, may lead to discrimination as well. The latest news is that a digital credit card developed by one of the big operating system developers reportedly disadvantaged female and African American applicants. Why? Because the analytics were done only on the big data available, and that was available in sufficient amounts only on white males’ financial activities.
So as an active computer and net user, whether I reveal my interests or, just the opposite, opt out of tracking my activities, I may be excluded from access to some services or goods and from political influence as a taxpayer. For me, these are losses that are getting more and more tangible.
SHARING AND OPEN SOURCING BALANCE
Jeremy Rifkin, the thinker from the US, writes about the zero marginal cost society. We share our knowledge and skills with others on the internet. Sharing means giving and taking. As a blogger, I know the rules, as well as the costs and benefits. I upload information that I have gathered in my posts, sharing it with others, and I take from others, downloading or using what they have shared with the world by posting on the internet. The sharing economy, it is called. Here I see a balance. Using the internet and sharing (giving and taking) makes my life easier and my work more efficient. I also see the difference as an academic who rejoined the community after years in practice. The struggles I lived through twenty or even ten years ago, which limited my time to analyze and think, are no longer here. Yes, my marginal costs got lower.
With crowdfunding on the internet, I have no problems at all, unless it is a scam. Even if money is collected for weird purposes. People exercise their free will before confirming a money transfer. The same goes if somebody has reached popularity on the net and earns on social media or blogs with advertisements, even if the activities are of a weird nature. No, I do not have a problem with that either. For most of them, it is daily hard work. They often employ paid staff to help them out. A job like any other. A business like any other. If harm is done to others, I consider it a crime. But crime on the net is like any other crime: pursued by the authorities and punishable.
But what about crowdsourcing without real consent from those who give, or even without their knowledge that they are giving? What about the thousands and thousands of photographs people post on social media, diligently hashtagging them in the hope of earning on their internet activity, with no true chance to do so? Are those photographs and hashtags in vain, as we might think? What about those making money on people’s vanity or, worse, on the created impression that anybody can earn on the internet?
We all know the world is working on artificial intelligence. The internet giants are the leaders here. But what might artificial intelligence have to do with our social network activities and our struggle to get more likes, more followers, and real earnings on the internet? Let us connect the dots.
Artificial intelligence must be trained. Training AI means showing a machine (a computer) many samples with proper descriptions. The simplest example is teaching a computer to distinguish between dogs and cats in a picture. A child, shown a dog and a cat and told once which is which, will manage. With machines, the process is more complicated. To train a machine to recognize dogs and cats, you need to show it a sufficient number of pictures of dogs and cats. The machine then analyzes them pixel by pixel, feature by feature, finding the similarities and differences. After analyzing many images and adjusting its parameters, the machine knows how to recognize a dog and how to recognize a cat. The process is called deep learning and is based on neural networks. Neural networks are said to simulate the processes in the human brain. It is, however, not some sophisticated biotechnology. Artificial neural networks are just layers of multiple functions with inputs and outputs. Just programming and computation. Unlike hand-written algorithms, however, neural networks do not require access to all knowledge about the issue; they can independently find solutions that the programmer did not foresee. Take a picture of a dog, right-click it and choose ‘Search Google for image’, and you quite quickly get the very image and many ‘visually similar images’ of dogs. But try to do it with some other objects. Quite quickly, you will see errors. The number of sample pictures has not yet been sufficient to complete the teaching process.
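The phrase ‘layers of multiple functions with inputs and outputs’ can be shown quite literally. Below is a toy sketch in Python of a two-layer network. The weights are made up by hand rather than learned from data, so it illustrates only the structure, not real training:

```python
import math

# A toy "neural network": layers of functions with inputs and outputs.
# The weights below are invented by hand; real training adjusts them
# automatically from many labeled sample pictures.

def layer(inputs, weights, biases):
    """One layer: weighted sums of the inputs, squashed to (0, 1)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid
    return outputs

def tiny_network(pixels):
    """Two layers stacked; the final output is a score between 0 and 1."""
    hidden = layer(pixels, weights=[[0.5, -0.2], [0.1, 0.9]], biases=[0.0, 0.1])
    (score,) = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
    return score

# Two made-up "pixel" values stand in for a whole image here.
score = tiny_network([0.8, 0.3])
print(round(score, 3))
```

Nothing biological is happening: each ‘neuron’ is a weighted sum followed by a squashing function, and stacking such layers is what the term ‘deep’ refers to.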
Now, how do you think the Google engineers training the machines, and their colleagues who worked on the ‘visually similar images’ tool, got the sample pictures? Did they just go out and take them? No, they used thousands or millions of photographs posted and diligently hashtagged ‘dog’ or ‘cat’ by social media users or, if you agreed, stored in your cloud picture library. The same pretty much works for all other possible objects. But dogs and cats are just a simple example.
Now, let us think of a house robot able to recognize that we are sad and to sprint to the fridge to serve us chocolate ice cream. Still out of reach, but probably not for long. How do you teach a robot to recognize that somebody is sad? Yet again, you need millions of photos of sad people with the proper hashtag. And still, it was, and is, users who provide them to the AI teachers.
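The harvesting step described above can be sketched as a simple matching of hashtags to labels. The posts and the tag-to-label mapping below are invented for illustration; real pipelines are vastly larger, but the principle is the same:

```python
# Hypothetical sketch: turning hashtagged social media posts into
# labeled training data. All post contents are invented.

posts = [
    {"photo": "img_001.jpg", "hashtags": ["#dog", "#walk"]},
    {"photo": "img_002.jpg", "hashtags": ["#cat", "#home"]},
    {"photo": "img_003.jpg", "hashtags": ["#sad", "#monday"]},
    {"photo": "img_004.jpg", "hashtags": ["#sunset"]},
]

# The labels the AI teachers currently want training samples for.
WANTED_LABELS = {"#dog": "dog", "#cat": "cat", "#sad": "sad"}

def harvest_labels(posts):
    """Keep every (photo, label) pair whose hashtag matches a wanted label."""
    dataset = []
    for post in posts:
        for tag in post["hashtags"]:
            if tag in WANTED_LABELS:
                dataset.append((post["photo"], WANTED_LABELS[tag]))
    return dataset

print(harvest_labels(posts))
# → [('img_001.jpg', 'dog'), ('img_002.jpg', 'cat'), ('img_003.jpg', 'sad')]
```

The users did the expensive part, labeling, by hashtagging their own photos; the harvester only has to collect the pairs.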
I do not know how many working hours Google already logs for its AI specialists, but I am pretty sure that the working hours of all social media users, summed up, exceed those of the Google staff. So the answer is no, the work we do for free on social media is not in vain. Social media users are the ones who put in the real working hours of AI teaching. As long as we get the results of the work done together with Google's people, or the staff of any other giant using our photos to train AI, back as added value in the net, we give and take. There is a balance. It is sharing. But I am pretty sure that some AI algorithms are sold, and will be sold, at a market price to manufacturers and service providers. Will the price of the robot serving us chocolate when we are sad include a bonus for those who contributed, or for their daughters and sons? We shall see.
Photos by: Andrew Neel, Negative Space, Quang Nguyen Vinh, Pixabay