Articles

Special Review: Beware of Big Tech Trickery (Follow-up to "17 Reasons Why Amazon Alexa and Other Voice Assistants And 'Smart' Speaker Devices are Junk For All Users")

 

By The Treasure-Sharer 

 

I didn't know how much I didn't know about Big Tech until a series of recent events prompted me to look into it more closely and write an article here about it. In the process, I turned to my man as a source to share my discoveries with, only to have him share far more of what he knew with me, divulging things about Big Tech companies that I likely never would have learned on my own, but that I was able to confirm, piece together, and understand for myself once he pointed me in new directions that I would not have thought to explore without his guidance.

 



I was very motivated to do this, because Facebook had done some really sneaky censoring to prevent us from sharing what we had to say about a unique development in a recent lawsuit against Google, which my man had told me to post about on our Gem or Junk Facebook page. Though we had referenced other big tech companies, we hadn't even mentioned Facebook in our post. Even so, Facebook silently removed the post twice before finally allowing it to remain, and then removed the ability to see or access the link we had attached, telling our audience the false reason that whoever I had shared the link from had changed the privacy settings of the post, when I had not shared the link from a person or group at all, but had selected and attached the link to the article myself.

 

I was bewildered by Facebook's suspicious behavior, and by the lengths it had gone to: first to prevent my post from being seen at all, and then to limit its impact by removing access to the article that would have given my write-up the context its viewers needed to understand what the bits I had quoted were referring to, and why I was making the assertions that I had included.

 

Not only had we not even mentioned Facebook in our post -- and were, in fact, asking viewers to re-examine and re-evaluate their use of Facebook's competitors' products, without even mentioning Facebook itself -- but our Facebook page doesn't even have followers, since we've never told anyone we know about it, asked anyone to like it, or advertised it, making it feel ridiculous that our little post wasn't even allowed to reach the few eyes and ears that it might have.

 

Why had Facebook responded so rapidly (even within minutes, or maybe even seconds) to my little posts?

 

I will go more into the actual contents of the removed posts and link in a bit, but first I think it's worth explaining what made us create the massive article that you are now reading.

 

Originally, Facebook's actions had bothered me enough to halt another article that I had planned and already started writing, so that I could put out a short piece about this incident and explore what had happened, and what it said about how Big Tech operates, in what I had planned to be a short follow-up to my special review on voice assistants and smart speakers.

 

Instead, it grew into the massive article that follows, as my man continued to unlock for me more layers of Big Tech's plans and operations that I had never before been made aware of, but was finally able to understand on a much deeper level.

 

Learning what I learned made me feel like I HAD to get it down and share it with others, so that more people could understand the bigger picture behind, and purpose for, Big Tech: the various entities and mechanisms that big tech companies have backing them, and at their disposal; the tricks, lies, and illusions that they use to keep us from discovering the truth and fighting back; and more.

 

It's imperative that we at least know what we're up against, what Big Tech has planned, and what our options are, so that we can at least have a chance to determine our own actions and outcomes, with the truth giving us the chance to make real choices.

 

My man has been studying the things that are being hidden from us for so long that he will never have the time to share with me everything that he knows (especially since he keeps updating and adding to his knowledge every day), but I'm very grateful that I at least understand this area a lot more than I did before, thanks to how much I have learned from him, and from personally investigating the things that he brought up and that later cropped up: what is being hidden from us about Big Tech, what is being used to trick us, and how our rights are truly being violated, and we are being harmed, through what is being kept from us.

 

I quickly realized that, although my studies at university, and my experiences dabbling with (and getting screwed over by) Big Tech, have made me a bit more aware of Big Tech's trickery than the general public, I'd barely touched the tip of the iceberg as to what is really going on with Big Tech: the real mechanisms behind and reasons for its existence, its extreme growth and popularity, and its insatiable and seemingly unstoppable expansion into every meaningful area of our lives.

 

So, what started out as a shorter post reviewing the experiences that we have had as a site, and that I have had personally, with big tech trickery has grown into the mammoth article that follows, as I have tried to relay the new understanding of Big Tech that I have come to develop, thanks to the insights and recent developments that my man has shared with me, and what I have discovered through investigating them further.

 

We hope that after reading this article -- where I will go into the outrageous events that my man and I have experienced with Big Tech, the implications of those experiences, the factors that have allowed Big Tech to grow so bold as to think that it can get away with (and, for the most part, actually get away with) things that no company should be allowed to engage in, and the other things that I have uncovered in the process -- you will really understand why Big Tech and big tech products cannot be trusted, and why we keep our use of them to a minimum.

 

Before I go into all the things that I've recently dug up about Big Tech, I'll tell you about some of the personal experiences that I'd had with Big Tech prior to my recent experiences with Facebook, which had already made me wary of Big Tech before I decided to write this article.

 

 

What made me start distrusting Big Tech

 

Even before Donald J. Trump called out the fake news media, I'd already been wary of big tech bottom lines and bullsh*t, since I'd learned over the course of my studies that pretty much ALL of the mainstream media was owned by six big companies, such that it was a bad idea to believe pretty much anything these outlets had to say: every apparently "different" newspaper, TV station, radio station, or online news site was just rehashing the agendas that the powerful owners of their parent companies wanted them to push.

 

I even had this confirmed by a guest speaker from the field of journalism, after I asked him whether corporations controlled what journalists wrote, whether it was true that six big corporations controlled the media, and whether they could force journalists to write what they were told to write, and fire them if they wouldn't comply. The man replied that there was no way that someone writing for a mainstream paper would be able to write about the stuff that we were discussing, and that everything journalists write has to be written in a particular way, and cater to certain audiences.

 

I also learned very early on, back in 2012, how dishonest Facebook was with its business practices, when I was thrown into using Facebook ads and had Facebook use so many sneaky tricks on me, and on the business account that I had been managing, to try to make as much money from me as it could.

 

Here's what happened:

 

After my first attempts at creating and showing ads produced pretty good results, I unsuspectingly upped my lifetime ad budget, expecting Facebook to continue showing my ads at a steady rate to my desired audiences. Instead, I was dismayed to watch Facebook blow through the entire budget within literally minutes, blasting the ad as quickly as possible to who knows what audiences. All I saw was that it had spent my whole budget while producing hardly any return on investment. Facebook ads, as I learned from that first experience, are designed to suck up as much money as they can, for as long as they can, whenever the opportunity allows.

 

It truly felt like Facebook was behaving like an irresponsible teenage girl given free access to daddy's credit card for one shopping trip: taking as much advantage of the situation as possible, burning through my ad money as quickly as it could, and draining it before I'd even realized what had happened.

 

It didn't even let me stop the campaign right away after I had caught what it was doing and paused the ad. Instead, it continued to squeeze in more ad showings to as many people as it could, so that, when I checked later, the number of people my ad had been shown to had nearly doubled since the moment I had clearly selected the option indicating that I wanted my ad to stop being shown.


I found these practices to be not only totally unethical, but also completely unnecessary, since I had already planned to spend the full amount of my budget eventually, and would have. I had simply wanted my ad showings to be spaced out and sent to the audiences that were relevant to my campaign, rather than spit out to any old person, as had happened as soon as I increased the amount of money in my campaign budget.

 

I quickly learned to put way less money into my budget in one go, and to be very careful with setting limitations, so as to not let Facebook trick me and steal so much money again.

 

However, because I had already upped my budget once, when I tried to set lower limits, Facebook began changing the lower limits that I had set back to the higher limit that it had abused, without informing me -- as if it were hoping that I wouldn't notice the change. It even changed my settings so that every new ad that I tried to create could only be activated if I went with the higher budget that I had previously agreed to. Facebook seemed very determined to box me into the big budget that it had abused when I hadn't known any better, and this greatly limited what I could do with my ad campaigns, but somehow I still managed to use them without being forced to agree to Facebook's bad terms.

 

An article that I later found over the course of doing research for this article basically confirmed my conclusions about the experience of advertising using big tech platforms.

 

The article Algorithmically Incorrect: The Lies Big Tech Tells Advertisers advises readers to "[p]ace to consumers, not budget," explaining that "[p]acing by spend is the prime directive of platform algorithms, because that's how they make money. It makes more sense, though, for audience-targeted media to be paced by the size of the audience, and their availability." In other words, big tech companies pace how fast they spend your money by how much money you put into your budget, as Facebook did with me, rather than pacing by the audience you actually want to reach. Pacing by audience would mean that, if the audience is small, your ad money gets spent more slowly, since it takes more time to find people in your target market to show your ad to.
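
To make the pacing difference concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers for the budget, impression cost, and audience sizes) comparing how long the same budget lasts when a platform spends as fast as it can buy impressions from anyone, versus only as fast as members of your actual target audience become available:

# Illustrative sketch with hypothetical numbers: two ways an ad platform could pace delivery.
# "Pace by spend" exhausts whatever budget you load as fast as impressions can be bought;
# "pace by audience" only buys impressions as matching target-audience members show up.

BUDGET = 500.00                     # total ad budget in dollars (hypothetical)
COST_PER_IMPRESSION = 0.01          # hypothetical cost per impression
TARGET_AUDIENCE_PER_HOUR = 2_000    # matching users available each hour (hypothetical)
ANY_AUDIENCE_PER_HOUR = 200_000     # any users the platform could blast the ad to each hour

def hours_budget_lasts(budget, cost_per_impression, impressions_per_hour):
    # How long the budget lasts if the platform buys this many impressions per hour.
    total_impressions = budget / cost_per_impression
    return total_impressions / impressions_per_hour

print("Paced by spend (blasted to anyone):",
      hours_budget_lasts(BUDGET, COST_PER_IMPRESSION, ANY_AUDIENCE_PER_HOUR), "hours")
print("Paced by audience (matching users only):",
      hours_budget_lasts(BUDGET, COST_PER_IMPRESSION, TARGET_AUDIENCE_PER_HOUR), "hours")

The first approach drains the budget in a fraction of an hour; the second stretches the same budget over roughly a day of actual target-audience availability, which is the behavior the article argues advertisers should demand.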

 

The article reveals that, because spending your ad money is the priority of the companies, they cram as many impressions as possible into the best-performing line items -- wasting money on remarketing to previous customers, knowing that they are more likely to buy, rather than trying to acquire new customers for your business.

 

The article explains that "the algo can waste money on [the previous customer] without appearing as inefficient as it might if it tried to acquire more customers."

 

As the article says, "Only reporting average frequency hides what really happened," which is why the article advises customers to optimize their entire frequency distribution, not average frequency.
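
As a rough illustration of how an average can hide the real exposure pattern (a small Python sketch with made-up numbers, not data from any real campaign), consider two campaigns that deliver the same number of impressions to the same number of people, and so report the same average frequency, even though one spreads exposures evenly and the other crams repeated showings onto a small group of previous customers:

from collections import Counter

# Campaign A: 5,000 impressions spread evenly -- 2,500 people see the ad twice each.
campaign_a = {f"user_a{i}": 2 for i in range(2500)}

# Campaign B: also 5,000 impressions and 2,500 people reached, but 100 previous customers
# see the ad 26 times each, while 2,400 new people see it only once.
campaign_b = {f"repeat{i}": 26 for i in range(100)}
campaign_b.update({f"new{i}": 1 for i in range(2400)})

for name, campaign in [("A (even delivery)", campaign_a), ("B (crammed delivery)", campaign_b)]:
    impressions = sum(campaign.values())
    reach = len(campaign)
    average_frequency = impressions / reach
    distribution = Counter(campaign.values())   # how many people saw the ad N times
    print(f"Campaign {name}: {impressions} impressions, reach {reach}, "
          f"average frequency {average_frequency:.1f}, distribution {dict(distribution)}")

Both campaigns report an average frequency of 2.0, but the distributions tell very different stories, which is exactly why the article pushes advertisers to look at the whole frequency distribution.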

 

The article recommends paying more for quality, and discusses how "brands can do what platforms won't: decide for themselves what constitutes a quality ad exposure for any given campaign, working from a minimum of human-viewed on a legit site, and force algorithms to spend only within the brand's guardrails. This way, data reveals where quality exists in the media landscape, and what it costs."

 

The article says that "[t]he only good solution is to audit your ad-buying algorithm before spending a dollar through it."

 

My informal audit of Facebook's ad platform is that it is unfair, and geared toward wasting customer money, which is why I haven't continued to use it.


Big Tech is about making money, and about manipulating whatever it can get away with to help it make even more money. As my experience with Facebook's ad campaigns showed me, Facebook wouldn't even let me have control over the most basic and critical aspects of my campaigns -- even when I was paying it for the use of its services. With how little respect it gives its paying customers, one can only imagine how it views and controls the actions of those who use Facebook for "free," and how it manipulates the public into giving it even more power to shape people's thoughts and perceptions, so that it can gain even more power, and extract even more money from whatever other situations it is able to manufacture and exploit.

 

Facebook's manipulation of the "options" that I was given, its attempts to impose higher ad budgets on me, and its disregard for when I pressed the pause button on my campaign reminded me, as I write about this, of the title of the book Manufacturing Consent (you can read a summary of its propaganda model of the media here: https://chomsky.info/consent01/), in the sense that Facebook forced me to go along with whatever IT chose for me -- not even heeding my command for it to stop throwing my ad money away -- just because I had given it access to it.

 

It continued to act like I had given it consent and free rein to do whatever it wanted, even when I had actively protested and tried to put a halt to it.

 

What I experienced is exactly what consumers of Big Tech realized when they protested against a court ruling ordering Google to pay them $425 million for its privacy violations, arguing that the amount and the punishment weren't enough. Like me, customers are getting sick of companies doing whatever they want and getting away with it, either completely without consequence, as they did with me, or with paltry penalties that don't make a dent in the profits they made through the calculated bad behaviors that they knowingly committed.


In our first special review on why voice assistants, smart speakers, and other similar big tech devices are junk for all users, we already went in depth into how big tech companies use these offerings to invade our privacy, and use our stolen information to better understand our psychology, more effectively convince us to buy what they and their advertisers want us to buy, and influence many of our other behaviors. (You should definitely read it in full, if you haven't already, to understand the bigger context of what drives big tech companies to develop and push the products that they sell to us as helpful.)

 

We even explained in that article how, despite the number of lawsuits that big tech companies have lost -- lawsuits that have proven that they lied to their customers about the data collection that their devices engage in -- they simply pay the comparatively small fees that they are slapped with, and continue committing the same crimes that they were charged with.

 

We are now past the point where companies even care about manufacturing consent, as our consent has stopped even being a factor in nonconsensual corporate behavior. 

 

As my Facebook ads experience showed me, my "no" did nothing to stop Facebook from eating the rest of my ad budget, once I had transferred it into Facebook's clutches. My disapproval of their imposed new minimum budgets didn't make them revert to offering me their old minimums.

 

 

The post that scared Big Tech

 

What I found interesting was what I experienced when I tried re-sharing our special review on Facebook (in the post related to the one I referred to earlier, which was taken down multiple times by Facebook), in response to a social media post by the company Presearch that my man had found, regarding customers demanding that Google be made to pay them $1.62 billion more, after the court had ordered it to pay $425 million to customers for its privacy violations.

 

I do the social media for Gem or Junk, so I re-shared the article that my man had found on X. You can see a screenshot of the post, as well as the text in the post, below:

 

[Screenshot: Twitter post about the Google case]

Gem or Junk
@realgemorjunk
·
Oct 24

Consumers are starting to recognize that #Google, #Amazon, #Apple, & other "too-big-to-fail" companies will NEVER stop their profitable #privacy #violations. 

 

But are they willing to stop using the devices?

Our special review lays out more concerns:

 

🚨https://gemorjunk.com/Articles/voice-assistant-smart-speaker-devices-review🚨

 

I was happy to see consumers pushing back, and holding Google more accountable than the court was willing to.

 

I didn't know at the time that Presearch had made a Facebook post, similar to the one it had shared on X, that I could have re-shared. So, for our Facebook post, instead of re-sharing a post, I looked up an article about the incident myself -- using the search terms that Presearch had provided -- linked to it, and published a post similar to the one that I had made on X. I did add a bit more to it, to take advantage of the lack of a word limit on Facebook, including two quotes from the article that I thought really captured the sentiments of those involved in the case:


"The jury found that Google's conduct was highly offensive, harmful, and without consent," 


and

 

"The plaintiffs called the $425 million damages verdict 'clearly insufficient to remedy the ongoing and irreparable harm that Google’s conduct continues to inflict.'"


I also included a bit more information about the special review that I was linking to, as well as some more hashtags.


I told my man that I had posted on both X and Facebook, and shortly after he told me that he had checked, and that my post was not on Facebook. I was so confused, because I had been sure that I had posted it there, but I rewrote my Facebook post and posted it again in the evening, before my man arrived to meet me. When I checked it again to show it to him, I saw that it was gone once more.

 

It was then that we knew that the missing post hadn't been missing because I hadn't posted it, but because Facebook had censored us, and had taken down the post.


[Image: censored x]

I was shocked, because, as I mentioned earlier in this article, I hadn't even mentioned Facebook anywhere in the write-up that I had included in the share. I would have been wary of Facebook protecting its own interests and censoring my post if I had lumped it in with the bunch of companies that I had named, but I hadn't included it because I honestly hadn't thought to, since its voice assistant had already flopped and become disabled earlier this year. See the image above, capturing how Facebook "X"'d my post twice.


I didn't want to have to deal with the post AGAIN that day, so my man told me to share my post again the next day, but to screenshot it as soon as I posted it, in case Facebook took it down again.


So I recreated the post using the same quotes and hashtags, making it as similar to the previous posts as I could remember (annoyingly, I hadn't saved my previous shares, since I hadn't realized that Facebook would mess with them).

 

The new post initially went up normally, with the article that I had attached and everything else still intact, and I took screenshots of the post at various levels of detail, blacking out my name to retain my anonymity. When I opened our Facebook page again to see whether the post had been taken down, Facebook surprised me again: this time, instead of taking down the full post, it made the article that I had attached unavailable, making it look like the "owner" of the content I was supposedly re-sharing had made it unavailable.

 

The screenshots below show how my post originally looked when I first published it, and how it looked two minutes later, after Facebook had dishonestly edited the link to make it inaccessible to those reading our post.

 

[Screenshot: original Facebook post about the Google case]
[Screenshot: censored Facebook post about the Google case]


It wasn't a share from another person or account that I had attached, but an actual link to an actual article (as my screenshot above shows). Facebook nevertheless made it look like it had been a share with a limited privacy setting, so that people wouldn't be able to read the attached article and understand the context of my post, or of the quotes that I had included.

 

See the link to the article that I had attached, that was made unavailable by Facebook, here: 

Consumers seek $2.36 billion from Google after privacy verdict.

 

The article was published by Reuters, a major news distribution agency, so the chances of the article being taken down or going down are very slim.

 

I'm guessing that Facebook realized that I wouldn't stop trying to make my post, and so decided to limit its impact instead, by not allowing people to see what my post was referring to, while hiding its own involvement: by making it sound like a third-party sharer had limited access to the article, when, in fact, it had been Facebook itself that had done so.


That Facebook orchestrated such a simple-seeming, yet elaborately-effective "solution" to rendering my post ineffective made me wonder what other strategies it uses to suppress other posts that explicitly challenge big tech practices.  

 

Thankfully, my man made me aware that Presearch had also posted the same content on its own Facebook page, so I took down the post that Facebook had tampered with, and was able to share Presearch's post instead.


I was very disgusted by the deceit that Facebook had demonstrated, but even more alarmed at what its censorship said about the relationships among the big tech companies.

 
[Image: censored THRICE]

I already thought that it was bad enough that big tech companies control so much of the internet and social media landscapes as separate companies. But Facebook's almost-immediate take-down of my social media post, not once but TWICE, followed by its almost-immediate take-down of the article that I linked to when I tried sharing the post again the next day, and its untruthful framing of why the article was inaccessible, showed me that the big tech companies are not as separate as they seem, and are clearly looking out for and protecting one another. Their connections clearly run deeper than they may appear to. I think the image above captures how we were censored, not just twice, but THRICE, in less than a day, thanks to big tech companies covering for each other.


As the article Understanding Big Tech Censorship and Supporting Free Speech says, "Big tech companies like Meta (formerly Facebook), X, and Apple play significant roles in content moderation and shaping public discourse. These companies implement policies and algorithms to manage and regulate the vast amounts of content generated by users on their platforms.


"Meta employs a combination of automated systems and human moderators to enforce its community standards. These guidelines cover issues such as hate speech, misinformation, and harmful content. Meta's approach includes removing posts, suspending accounts, and limiting the reach of certain content. The company has faced criticism for both over-censorship and under-censorship, highlighting the challenges in balancing free speech with maintaining a safe online environment."


"Silicon Valley, home to many leading tech companies, exerts considerable influence over public discourse. The policies and practices of these platforms can amplify or suppress certain viewpoints, shaping the information landscape. This influence raises concerns about the concentration of power in a few private companies and their ability to control the flow of information, potentially impacting democratic processes and societal norms."


All I said in my censored Facebook post was the following:

 

Gem or Junk

 

Published October 25 at 2:42PM

 

Consumers are starting to recognize that #Google, #Amazon, #Apple, & other "too-big-to-fail" companies will NEVER stop their profitable #privacy #violations.

 

"The jury found that Google's conduct was highly offensive, harmful, and without consent."

"The plaintiffs called the $425 million damages verdict 'clearly insufficient to remedy the ongoing and irreparable harm that Google's conduct continues to inflict."

 

But is anyone willing to stop using Big Tech devices?

 

Our special review lays out more privacy concerns, & other important issues:

 

 

https://gemorjunk.com/Articles/voice-assistant-smart-speaker-devices-review/

 

#PrivacyMatters #PrivacyProtection #privacyrights #PrivacyFirst #PrivacyConcerns #BigTech #BigTechAccountability #bigtechregulation #bigtechbattle

 

 

Contrast this with what I was finally able to post:

 

Gem or Junk

 

Published October 25 at 2:42PM

 

Consumers are starting to recognize that #Google, #Apple, #Amazon, & other "too-big-to-fail" companies will NEVER stop their profitable #privacy #violations.

 

But are they willing to stop using Big Tech devices?

 

Read about more concerns in the special review we wrote detailing the issues with voice assistants and other "smart" devices:

 

https://gemorjunk.com/Articles/voice-assistant-smart-speaker-devices-review/

 

#PrivacyMatters #PrivacyProtection #privacyrights #PrivacyFirst #PrivacyConcerns #BigTech #BigTechAccountability #bigtechregulation #bigtechbattle

 

 

 

The difference was that I had included quotes showing juries and other groups of people taking a stand against Google's ongoing and unrelenting harmful actions, quotes that called out how Google was being let off too easily, and that I had linked to an article showing people fighting against this.

 

Clearly, Facebook took issue with our bringing attention to the rebellion of the people against Big Tech, and against rulings that fail to meaningfully address the harm Big Tech causes, in order to push for real consequences and bigger punishments. It doesn't like people knowing about cases of people uniting against Big Tech, or refusing to let Big Tech get away with the hand-slap rulings that courts give it.

 

Big Tech likes us to believe that we are powerless, and have to accept whatever it and the governments that it buys off and is funded by decide, which is why it is quick to bury and kill anything that suggests otherwise.

 

That I also used the post to highlight how the companies never planned to stop, and how customers were aware of this and wanted more accountability and more punishment for the companies' lack of action or remorse -- and that I questioned whether people should stop using the companies' devices altogether -- ensured that Facebook would not allow my post to be seen, or to present alternatives to the self-serving narratives and images that Big Tech has worked so hard to create and cement.


When the biggest companies and sellers of privacy-invading products aren't even truly competing with each other, and are even taking down truths about the dirty business practices of their supposed competitors, it's clear that the "separation" between the different companies selling us one smart speaker over another, or offering us one search method over another, is just one big illusion, designed to make us feel like we're choosing between different options, when, in reality, all the "options" are pretty much the same.

 

It's like what my man taught me years ago: that "choice" today is really an illusion of choice -- when the REAL choice is to say "no" to all the fake options, and choose to use the options that AREN'T made with an agenda to trick and enslave us in mind.

 

NO


It's the reason why we continue to use alternatives like the Brave browser and Presearch search engine to allow us to be free of things spying on us and selling us things we don't want as we search, and why we refer others to use Presearch too, with our affiliate link (which we have attached to the end of this article, in a referral banner that you can use to sign up for it, if you want to try it out, while helping support our site). 


We know that the great majority of what is on the internet is set up to collect our information without telling us (or at least bury the fact that it does so in too-long terms and conditions that no one reads), and manipulate us to believe the agendas being pushed, and that's why my man has worked so hard to find us alternatives to use that don't do this, or at least minimize it.


[Image: jailed]

My man has had a lot of firsthand experience with Facebook censorship, as he has already landed in Facebook "jail" a number of times for posting about issues like the one that he told me to post about on our Facebook page, and many other important issues that big companies and the people in power don't want the public knowing about. Like in the image to the side, his posts have been barred from appearing in his followers' newsfeeds, so that his followers have to go to his actual page to see the content that he posts, if it is even shown there (I know that I couldn't see some of his posts even when I went to his Facebook wall through my own account).

 

 

Why we decided to warn you to "Beware of Big Tech"


Big tech companies do a lot to trick us into believing what they want us to believe, into giving them information that we don't know we're handing over, and into buying (and buying into) whatever they're selling -- and that's why we need to be aware of what they do, and beware and be wary of ALL of the products that they produce, the projects that they are involved in, and the companies and organizations that they partner with and invest in.

 

[Image: Beware of the Dog sign]

If only the things that Big Tech is involved in came with "Beware of Big Tech" signs, in the same way that houses with dogs often come with "Beware of Dog" signs, like the one in the image to the side. Then people could become aware of how far Big Tech's involvement, impact, control, power, and reach really extend, and have something to make them think twice about the safety of using anything associated with big tech companies, which have ballooned to become involved in nearly every aspect of our lives.

 

This article, which started out as an extension of my list of "17 Reasons Why Amazon Alexa and Other Voice Assistants And "Smart" Speaker Devices are Junk For All Users," has developed into an article of its own, as I have uncovered more and more about Big Tech, both through more information that my man has shared with me (including little-known facts about big tech companies, such as their real origins and purposes, updates about new lawsuits being filed and won against Big Tech, and the ever more outrageous encroachments on our rights and freedoms that big tech companies continue to announce and impose on us), and through other huge things that I came across while looking deeper into Big Tech.

 

As I mentioned at the start of this article, this special review brings together the big tech findings that I have gathered and amalgamated, to help you understand why my man and I do our best to stay away from and minimize our contact with Big Tech as much as possible (though we haven't been able to cut ourselves off from using it completely, under current conditions), and why we want to at least inform you of some of what we know, to help you decide how much you want to integrate Big Tech into and keep Big Tech in your life.

 

Now that you have learned some of the tricks that Big Tech has pulled on us to censor us and extract as much money from us as it can, read on to learn why you should beware of big tech trickery, and 20 things you should beware/be aware of, so that you can hopefully know more and protect yourself better against Big Tech.

 


1) Big tech companies censor information about their bad practices, keeping whatever they are able to hide from being shared with the public -- and they do so in sneaky ways that no one would ever know about unless they experienced having their own content censored (as my man and I have).

 

While you may have heard about the biggest scandals that have broken out about Big Tech, such as the Cambridge Analytica scandal that broke in 2018 and was too big to hide (see Cambridge Analytica case: last wakeup call before GDPR for a brief summary of it) -- which finally gave people a taste of how much of their private information companies had access to, and how it could be leaked and misused without their consent -- the immense amount of control that big tech companies have over our access to information, and over the spread and reach of content, actually prevents most people from hearing about anything that Big Tech can get away with keeping from us.

 

The article Big Tech Methods Change — BUT Secondhand Effects of Censorship Remain confirmed what my man and I already knew from experience, saying that, "[a]s the ill effects of censorship continue to be exposed, Big Tech's methods have become more elusive in recent years. The days of content or accounts being censored directly have dwindled. Instead, many big tech companies have ramped up less transparent censorship practices, such as shadowbanning, freedom of speech but not reach policies, or search suppression."

 

I'll go more in-depth into what these three sneaky censorship practices are, and give examples of what we have personally experienced, when relevant, to help you understand how bad these tricks are, and how effective they are at blocking anything that the corporations and their financiers don't want shared.

 

--

 

 

Shadowbanning

 

[Image: face half-hidden in shadow]

Shadowbanning is something that happens to my man's content all the time. As the name suggests, and as the image to the side shows, content that platforms would prefer to ban (or prohibit from being seen) is relegated to the shadows -- to the bottom of the timeline, or to other places where it can't be seen -- to keep people in the dark about it. The article How Shadow Banning Can Silently Shift Opinion Online describes a study in which Yale School of Management's Tauhid Zaman and Yen-Shao Chen show how a social media platform can shift users' positions, or increase overall polarization, by selectively muting and amplifying posts in ways that appear neutral to an outside observer.

 

As one of the researchers said, "Shadow banning is hard to spot because the opinions that are muted depend on their stance relative to other users -- resulting in a mix of shadow-banned and amplified users, without any obvious rhyme or reason. If, for example, a network's goal is to move the collective sentiment to the left, the network might choose to show the content of a moderate user to a relatively right-leaning connection (to pull that connection leftward) -- but block that same content from the timeline of a left-leaning connection (to keep that connection from moving even slightly toward the right). At first blush, the banning appears to impact every user more or less equally."

 

Zaman argues that this is a more potent means for social media platforms to control collective opinions over time than the outright removal of objectionable content or users. Part of this tool's power derives from the fact that it is currently near-impossible to uncover, even for policymakers or software engineering experts, because it limits the broader visibility of a user's content without their knowledge: a Facebook or Instagram post that has been subjected to shadow banning remains on the original poster's profile page, but appears less, or not at all, in the timelines of other users.
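
To illustrate the basic mechanism, here is a toy Python sketch of my own (not the Yale study's actual model; the opinion scale, nudge size, and audience are all made up) showing how hiding a moderate account's post from one side of its audience, while leaving it visible to the other side, shifts the audience's average opinion even though the post itself stays up:

import random

random.seed(0)
followers = [random.uniform(-1, 1) for _ in range(1000)]   # opinions: -1 = left, +1 = right
poster_opinion = 0.0                                       # a "moderate" account's post
NUDGE = 0.05                                               # how much one exposure moves a viewer

def average_opinion_after_post(opinions, is_visible_to):
    updated = []
    for opinion in opinions:
        if is_visible_to(opinion):                          # platform decides who sees the post
            opinion += NUDGE * (poster_opinion - opinion)   # viewer drifts toward the post
        updated.append(opinion)
    return sum(updated) / len(updated)

shown_to_all = average_opinion_after_post(followers, lambda o: True)
hidden_from_left = average_opinion_after_post(followers, lambda o: o > 0)

print(f"Average opinion when everyone sees the post:     {shown_to_all:+.4f}")
print(f"Average opinion when it is hidden from the left: {hidden_from_left:+.4f}")

When the post is shown to everyone, both sides drift slightly toward the moderate poster and the average barely moves; when it is hidden from left-leaning followers only, the right-leaning followers still drift leftward while the left-leaning ones stay put, so the overall average shifts left without any post ever being removed.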

 

Facebook uses so many ways to limit the visibility and reach of my man's posts, and examining some of them should give you a good idea of how hard Big Tech works to keep the truth from being shared. On top of sometimes having his posts taken down with fake "fact checks," my man has at times been overtly notified that his posts will appear at the bottom of people's timelines, has been prevented from posting certain things, and has been barred from posting at all for certain periods of time. There have been other times when we have discovered that some posts he has made are not visible to others, even when people go to his actual Facebook page and scroll down his timeline to find them. (I know, because he has sometimes told me to go look at a post on his wall, and I couldn't find it or see it.)

We also noticed that the number of views displayed on the videos that he uploads is artificially lowered, such that the view count shown while watching a video from my account was significantly lower than the number of views it actually had (which my man could see from his account). This not only lets Facebook circulate his videos less, on the grounds of their manufactured "low" view counts, but also makes his audience perceive his posts and videos as having less value than they really have, by making them appear not to warrant more views, leading to fewer clicks from people who might otherwise have been interested in watching them.

 

 

Freedom of Speech, Not Reach

 

The article Freedom of Speech, Not Reach: A business strategy to maintain corporate media monopolies while avoiding constitutional first amendment breaches critiques Twitter's "Freedom of Speech, Not Reach" policy as openly employing and celebrating a disguised form of shadowbanning. It says that, while the key issue presented and defended is brand safety, there are two main subtexts. The first is revealed in the marketing (ergo propaganda) catchphrase "Freedom of speech, not reach": "for those who post things that are judged as lawful, but awful, you get labeled, you get de-amplified, it cannot be shared, and it is demonetized." To this, the article asks how the posts are judged, and by whom. The article describes the second subtext as more subtle, and as revolving around suppressing alternative media voices and content.

 

It says, "This is clearly a strategy specifically designed to evade [U.S.] Constitutional Bill of Rights First Amendment restrictions... However, it is also a strategy which acts to sustain the monopolistic protections and practices which corporate 'mainstream' media is desperate to defend[,] as citizens throughout the world are turning to 'alternative' media sources for news, opinions, and other information which does not align with governmentally 'approved' narratives (ergo government propaganda)."

 

The article notes that some legal experts assert that such "do not amplify" strategies that throttle various messages and content at the "suggestion" of the government are equivalent to viewpoint discrimination, which violates the free speech, freedom of press, freedom of assembly, and freedom of petition clauses of the first amendment.

 

The article describes the "monopoly" part of the current media ecosystem: "The 'traditional' business model pursued by corporate media and the frenemy oligopolies that have come to control virtually all large information outlets has been one in which they have been allowed to completely control the Overton Window -- the window of allowed discourse and range of politically acceptable policies made available to the mainstream population at a given time."

 

It explains how one set of consequences of this corporate media consolidation has been a creeping investigational laziness, and comfort with the status quo, where independent reporting is rare, and where the simplest and most lucrative path forward for these media oligopolies is to push out information which is consistent with governmental and corporate needs and desires, since governments, corporations, and their public-private partnerships have become the main customers of the media oligopolies (since end consumers of media products want them fast and cheap, or free, and are viewed by corporate media's current primary customers more as objects to be manipulated, propagandized, and marketed to).

 

This both confirms and gives more insight into what I discussed earlier in this article, regarding the concentrated ownership of all the main media companies by only a few big companies dictating what content and viewpoints are allowed to be put out, and the purpose for this.

 

The article says that social media has also largely adopted and consolidated this same business model through mechanisms such as the World Economic Forum-promoted GARM (Global Alliance for Responsible Media) agreement, a cross-industry initiative founded by the World Federation of Advertisers (WFA); the BBC-led Trusted News Initiative; and Google AdSense. The "Trusted News Initiative" has grown to become the umbrella organization that unites social media and corporate media under a monopolistic trade organization that seeks to defend the global corporate media oligopoly against the intrusion of alternative media, while Google and its AdSense operation have become the tool used to systematically deny advertising dollars to alternative media that Google has determined have transgressed by circulating content that Google deems to go against the approved narrative (or the interests of the administrative state), such as when Google banned ads from running on stories spreading what it determined to be "debunked coronavirus conspiracy theories."

 

The article also describes how corporate and mainstream media's traditional business model of making money from subscribers and advertising has been failing for quite some time now, resulting in much less depth and complexity in their staffing, and in the work they produce -- which has resulted in their failure to fulfill the traditional competitive "investigational news" role that has served as a historic brake on government and corporate corruption and malfeasance.

 

The article describes how "[i]nto the breach has stepped a variety of largely volunteer 'minutemen' journalists which fuel alternative media information streams... And these alternative media sources now increasingly fill the niche for true investigative journalism which has been abandoned by corporate media." 

 

My man is one of the people who have stepped up to investigate and report real news and truth, and this article is an example of a true investigative piece that you will never find on the mainstream media anymore.

 

As the article says, "Leaving news stories mostly or totally uncovered if they feature inconvenient narratives is similarly a norm."

 

 

Search Suppression 

 

Search suppression buries articles like this one in search engine results, so that their limited visibility and viewership make it difficult for their important but dissident messages to make their way into public discourse.

 

My man has to deal with search suppression every day, and has shown me so many examples of false narratives, and ones that discredit the true story, dominating the first few pages of searches, while burying and hiding truth, such that one has to already know how to distinguish between what's real and fake, and how and where to look, to get the real facts. 

 

The article Censorship in Search: How Search Engines Shape Our Access to Information says that censorship in search engines refers to the suppression or restriction of access to certain types of content or information, which can be the result of direct intervention by governments or other powerful entities, or an indirect consequence of the way in which search engines are designed and operated. Deliberate suppression of information in search engines occurs when specific content is intentionally excluded from search results due to external pressure from governments, regulatory bodies, or other powerful entities -- aiming to control the narrative, and restrict access to certain types of content that these entities consider harmful, controversial, or contrary to their interests. 

 

"By intentionally excluding certain content, search engines can limit the diversity of perspectives available to users, restrict freedom of expression, and undermine the democratic process." 

 

The article describes commercial interests as playing a significant role in shaping the content and ranking of search results. Since most search engines rely on advertising revenue to sustain their operations, they may prioritize content that is more advertiser-friendly or that serves their financial interests, which can lead to a skewed representation of information, and to the suppression of content that does not align with those commercial interests. It lists the following as some of the ways in which commercial interests can impact search engines:

 

Advertiser Influence: Advertisers often prefer to have their ads displayed alongside content that aligns with their brand image or target audience, which can lead search engines to prioritize content that is more appealing to advertisers, potentially at the expense of controversial, niche, or less popular content that might not be as attractive to advertisers.

 

Sponsored Content: Search engines may display sponsored content or ads prominently in search results, influencing users' exposure to information, since users may not recognize the distinction between paid and organic content, even when sponsored content is labeled.

 

Click-Driven Algorithms: To maximize advertising revenue, search engines may prioritize content that generates more clicks and engagement, as this translates into more ad views and revenue, which can lead to the promotion of clickbait, sensationalist, or controversial content over more nuanced or informative content that may not generate the same level of user engagement (see the small sketch after this list).

 

Monetization Strategies: Search engines may develop features or services aimed at increasing revenue, which can impact the way that content is ranked and displayed (e.g. search engines may promote e-commerce listings, local business results, or subscription-based content over other types of content, to generate additional revenue).

 

Market Dominance: The dominance of a few major search engines in the market can further concentrate the influence of commercial interests on search results, with the limited competition enabling these search engines to have more control over the information landscape, making it difficult for users to find alternative sources of information that may not be influenced by commercial interests.
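
To illustrate the click-driven point from the list above, here is a minimal Python sketch with entirely made-up page titles, click-through rates, and quality scores, comparing a ranking based purely on predicted clicks with one that also weighs an independent quality signal:

pages = [
    # (title, predicted click-through rate, independent quality score from 0 to 1)
    ("You won't BELIEVE what this gadget records!", 0.12, 0.20),
    ("Plain-language teardown of a smart speaker's data flows", 0.04, 0.90),
    ("Celebrity reacts to privacy lawsuit", 0.09, 0.30),
    ("Court filing summary: what the jury actually found", 0.03, 0.85),
]

# Rank purely by predicted clicks (a proxy for ad revenue).
by_clicks = sorted(pages, key=lambda p: p[1], reverse=True)

# Rank by a blend that normalizes clicks and weighs quality more heavily.
by_blend = sorted(pages, key=lambda p: 0.3 * (p[1] / 0.12) + 0.7 * p[2], reverse=True)

print("Ranked by predicted clicks alone:")
for title, ctr, quality in by_clicks:
    print(f"  {title}  (CTR {ctr:.0%}, quality {quality:.2f})")

print("Ranked with quality weighted in:")
for title, ctr, quality in by_blend:
    print(f"  {title}  (CTR {ctr:.0%}, quality {quality:.2f})")

Under the clicks-only ranking, the clickbait headlines float to the top; once quality is weighed in, the informative pages do. Real ranking systems are far more complex, but the incentive the article describes pushes in this same direction.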

 

The article gives some ways to counteract the influence of commercial interests in search engines, such as exploring alternative search engines or information sources that prioritize user experience, privacy, or unbiased content over advertising revenue. Other ways that it lists include fostering competition among search providers, and advocating for more transparent and equitable search engine practices, which can create a more diverse and balanced information landscape that serves a broader range of user needs and interests.

 

--

 

Shadowbanning, "Freedom of Speech, Not Reach," and search suppression were some of the sneaky forms of suppression that the Media Research Center (MRC), a research and education organization, tracked in its study of secondhand censorship.

 

Using MRC's exclusive CensorTrack.org database, MRC Free Speech America researchers identified and documented 983 cases of censorship in 2024 (excluding cases of X Community Notes), stating that this censorship translated to 148,076,868 times that platforms harmed social media users, by preventing them from viewing content posted by the accounts that they chose to follow. They said that this phenomenon is best thought of as the "secondhand censorship effect," explaining how traditional social media platforms like Facebook, Instagram, and X have seemed to wind down their direct censorship of large accounts, and, instead of removing huge swaths of content and suspending influential personalities, are finding new ways to censor that are more difficult to identify, track and quantify -- like in the case of Google search suppression.

 

"As a result, a single documented case of censorship is not always a clear indicator of the broader picture of censorship. Rather, it reflects a small dot of the picture."

 

As the article says, "In some cases, deeper research techniques have become increasingly more necessary to document censorship in an effort to capture what psychologist and censorship researcher Dr. Robert Epstein refers to as 'ephemeral experiences' or subliminal unquantifiable attempts to silence speech or restrict reach."

 

It's clear that, as the ill effects of direct censorship continue to be exposed, big tech companies trick us into believing that they are censoring less, by engaging in secondhand censorship: less transparent censorship practices that allow them to censor individuals in more elusive ways that still allow them to limit our free speech.

 

As the article Big Tech Lies says, "It's not quite a matter of outright, malicious lies in the classic sense. It's far more subtle, more pervasive, and in many ways, more dangerous. It's about the truths they conveniently gloss over, the critical details buried deep in terms of service agreements no one reads, the unspoken implications of their business models, and the narratives they meticulously craft to present themselves as benevolent facilitators…"

 

It's imperative that we recognize what they are censoring and lying about, and actively seek out alternatives that allow us to stay informed and able to make informed decisions, rather than let Big Tech and its backers determine everything for us.

 

 

2) Big tech companies fund fact-checkers, and buy media companies, to control the narratives and information put out about them, and any other areas that affect them and those who back and fund them -- allowing them to influence and manipulate how the public perceives, interprets, and responds, in ways that benefit Big Tech's bottom lines, and further its wider, deeper agendas.

 

On top of censoring through more secretive means, big tech companies also fund "fact-checkers" to discredit anything that goes against the narratives that they want to push.

 

As the article Gates Foundation funds Facebook fact-checkers that defend it from allegations says, "The Bill and Melinda Gates Foundation provides over $250 million dollars in funding to news organizations, charitable organizations affiliated with news outlets, journalistic organizations, and fact-checking groups that regularly give investor and philanthropist Bill Gates and the Gates Foundation favorable coverage, according to an in-depth report from Columbia Journalism Review." These funded groups are the real organizations behind "fake news."

 

"The Gates Foundation provides this funding through charitable grants, and has given over $2 million to groups such as fact-checker Africa Check ($1.48 million), media company Gannett ($499,651), and the journalism school the Poynter Institute ($382,997), and these groups have, in turn, defended or favorably covered Gates and the Gates Foundation in their fact-checks."

 

The article goes into how Facebook works with fact-checking partners that are certified by Poynter's International Fact-Checking Network (IFCN), how Africa Check and Politifact are both Facebook fact-checking partners, and how Facebook CEO Mark Zuckerberg has confirmed that when a warning label gets applied to Facebook posts after they're fact-checked, it drastically cuts their viewership, and results in users not clicking through to the content 95% of the time, such that "the decisions of these Gates-funded fact-checkers can determine how well content about the coronavirus or vaccine health concerns performs," with a "false" rating cutting its click-through rate by around 20x (if only 5% of users click through, that is roughly a 20-fold reduction).

 

The article Who Funds Facebook Fact Checkers? says, "While presented as a tool to protect the public, what it amounts to is blatant censorship, which can easily push certain agendas into public view while silencing others."

 

The article goes into a lawsuit filed against Facebook, Zuckerberg, and the fact-checking organizations Science Feedback, Poynter Institute, and its subsidiary Politifact, where the nonprofit group Children's Health Defense (CHD) alleges that Facebook censored information that the CHD shared regarding vaccine safety and 5G health concerns, comparing Facebook to the printing presses of 17th century England (through which the government controlled free speech), alleging that the U.S. Centers for Disease Control and Prevention and WHO actively partnered with Facebook to censor speech from the CHD that was critical of government policy.

 

The article goes into how, in regard to Facebook and Zuckerberg, the suit alleges, "At a time when the social media platform and its creator claim to be exponents of free expression and the scientific method for discovering truth, this case reveals the opposite: that they are indeed censors, and opponents of real science and open debate."

 

Checking the funding sources of other fact-checking sites shows us how they have received funding from Big Tech. For example, the FactCheck.org funding page shows how this project of The Annenberg Public Policy Center of the University of Pennsylvania received funding from Facebook, Google, and YouTube. 

 

I still feel really bad and stupid for forwarding to my man a "fact-check" from one of my Facebook friends at the time, who had sent me a link to a Snopes post that deemed information that my man had shared about something to be false. We hadn't known much about Snopes at the time, so my man decided to take down the related post while he investigated, and we were both very annoyed to discover that my man's information had, in fact, been correct, and that Snopes' "fact-checking" was made-up bs.

 

I got to experience the search suppression that I discussed earlier while looking for information about Snopes for this article, as all of the articles that came up during my initial search, and even Brave's AI answer feature, vouched for Snopes' accuracy and credibility, and I had to include very specific details that I remembered about Snopes' founders to finally uncover what used to be easy to find: just how little credibility Snopes and its employees really have.

 

I had used the Brave search engine that was the default search engine on my Brave browser for the search, and my man later told me that it bases its search results on Google -- hence its heavily biased results (and hence why it's important to use Brave for what it's good for [ad-free browsing], and use Presearch for our searches).

 

I had to dig to find the article MEET YOUR FACT-CHECKER which says, "Snopes.com, which claims to be one of the Web's 'primary resources' and 'painstakingly researched and credible,' ... was founded by husband and wife Barbara and David Mikkelson, who used letterhead claiming to be a non-existent association to start their research."

 

"Now they are divorced -- Barbara claims in legal documents that [David] embezzled $98,000 of the company's money and spent it on 'himself and prostitutes.'"

 

The article says that the founder's new wife, Elyssa Young, is employed by the site as an administrator, has worked as an escort and porn actress, and -- despite the site's claim of being non-political -- ran for Congress as a Libertarian on a "Dump Bush" platform.

 

The article also reveals that Snopes' main "fact-checker" is Kimberly LaCapria, whose blog "ViceVixen" says that she is in touch with her "home page," and wrote on Snopes.com while smoking pot.

 

The article makes a point of showing how this company -- embroiled in a bitter legal dispute between its co-founders, with the CEO accused of using company money for prostitutes -- is one of the websites used by Facebook to arbitrate "fake news," and is part of a panel used by Facebook to decide whether stories that users flag as potentially "fake" should be considered "disputed."

 

You can read a more in-depth look at Snopes and its employees in the article Facebook's Snopes Fact-checkers -- a Prostitute, a Dominatrix, an Accused Embezzler, which lists some examples of the biases, fallacies, and falsehoods that Snopes has been called out for, according to The Daily Caller, including the following: 

 

• "TheDC exposed a Snopes lie about the lack of American flags at the Democratic convention, trying to pass off a picture from day two of the convention as though it were from day one."

• "[A] Snopes attempt at discrediting a news story from The Daily Caller News Foundation was riddled with factual errors and omissions."

• "Lacapria even tried to contradict the former Facebook workers who admitted that Facebook regularly censors conservative news, dismissing the news as 'rumors.'"

 

As the image to the side suggests, Snopes itself is a fake.

 

In addition to funding news organizations and fact-checkers to create the illusion of secondary sources that "independently" verify or discredit whatever Big Tech and its cronies want, Big Tech has also outright bought the voice of the media -- literally buying out existing news organizations, partnering with them, or creating new media outlets whose narratives it can fully control. Some examples of this include Microsoft partnering with NBC to form MSNBC (which was later sold in full to NBC, and has since been renamed MS NOW), and Amazon's Jeff Bezos buying The Washington Post.

 

As the article How big tech is creating its own friendly media bubble to 'win the narrative battle online' says, some of tech's most powerful people are increasingly found on a constellation of shows and podcasts like Sourcery that provide a safe space for an industry that is wary of, if not openly hostile toward, critical media outlets -- some created by the companies themselves (such as Palantir and Andreessen Horowitz creating their own media ventures), and others friendly to the heads of tech's largest companies, from whom they can score interviews and support, including Mark Zuckerberg, Elon Musk, Sam Altman, and Satya Nadella.

 

The article says, "At a time when the majority of Americans distrust [B]ig [T]ech and believe artificial intelligence will harm society, Silicon Valley has built its own network of alternative media where CEOs, founders[,] and investors are the unchallenged and beloved stars. What was once the province of a few fawning podcasters has grown into a fully fledged ecosystem of publications and shows supported by some of the tech industry's most powerful."

 

This is another example of fake news, as the image above shows.

 

Buying media also gives Big Tech political influence. The article Why Amazon's Jeff Bezos bought The Washington Post theorized that one of Bezos's potential reasons for buying the newspaper was simple: "He's buying political influence."

 

The article quotes Brian Dudley from The Seattle Times as saying that Bezos is, above all, a shrewd businessman, and that, while newspapers are "limping financially" these days, "they continue to have considerable power and influence, particularly over government and especially if you're talking about The Washington Post."

 

Dudley is further quoted as saying that, yes, the Amazon founder is getting one of the nation's pre-eminent newspapers for "less than his peers may spend on a boat," but "the curious Seattle billionaire is also getting the best seat at the table in Washington, D.C., an opportunity to more directly influence America's future," and the keys to a journalistic institution that "nearly every adult in the U.S. has read or been affected by." 

 

Brad Stone, who wrote a book on Bezos, affirmed that "you can't escape that Bezos is buying a lot of political influence."

 

It is important to stay aware that much of what is presented as "fact," and much of what is flagged as "false," is often based not on truth but on Big Tech-filtered narratives and agendas. Finding and using alternative sources of information is therefore imperative, to avoid falling for Big Tech's manipulation of our concepts of "fact" and "fake."

 

 

3) Big Tech uses its pervasive connections to directly and indirectly influence the government in unquantifiable ways -- far more than its direct lobbying reflects.

 

Big Tech's influence over the government and politics extends far beyond purchasing media to influence political and public opinion through it.

 

The article Breaking up with Big Tech: A human rights-based argument for tackling Big Tech's market power identifies how big tech companies use lobbying as a tool to ensure that they maintain their powerful positions -- and goes into how much bigger their lobbying and influence are than their direct lobbying spending reflects.

 

The article Advocacy vs. Lobbying: Understanding the Difference defines lobbying generally as "any attempt to influence a politician or public official on an issue," and direct lobbying as "any attempt to influence new or existing legislation via communication with a member of the legislative body or other government representative who has a say in the legislation."

 

It explains how, in 2024, in the U.S. alone, Meta reported over $24 million worth of spending on lobbying, Amazon just under $18 million, Google over $12 million, Microsoft over $9 million, and Apple just under $8 million. In Europe, the five companies spent between €35 million and €39 million on lobbying in Brussels in 2024, putting them all in the top 10 highest-spending corporate lobbyists and giving them significant access to governmental influencers: according to their own reporting, the five big tech companies managed to secure 1,235 high-level meetings in Brussels in 2024. That access extends into the development of specific legislation -- in 2023, during the development of the EU's AI Act, industry got 86% of the meetings on the file with high-level officials at the European Commission.

 

In addition to the millions of dollars spent on direct lobbying -- which lets big tech companies buy the ear, support, and voice of the legislators and government representatives who have a say in legislation, so as to influence new or existing laws in Big Tech's favor -- big tech companies also have close relationships with think tanks, consulting firms, and business/trade associations that lobby on their behalf and with the priorities of their members. For example, Meta was reported to have spent between €725,000 and €1.5 million on lobbying work through consulting firms and lawyers in 2024, on top of its own lobbying spending. As well, in 2024, Amazon listed 89 memberships and sponsorships on the EU Transparency Register, Meta listed 69, Microsoft listed 59, Apple listed 42, and Google listed 70. 

 

"All five [b]ig [t]ech corporations have listed themselves either as direct members or indirect members (through their national member organizations) of The Association of European Employers (BusinessEurope) -- the largest lobby group in Brussels -- as well as DigitalEurope, the largest digital business association (by lobbying spend).

 

It is not surprising, then, that BusinessEurope -- which the article describes as a major force in EU policymaking, representing organizations from a vast array of sectors, working on almost all issues of interest to industry as well as on the development of the European project as a whole, and at the forefront of the "Better Regulation" agenda -- "has also been a vocal advocate of recent moves by the European Commission to 'simplify' -- or deregulate -- existing [EU] legislation, including legislation related to technology companies, such as the General Data Protection Regulation (GDPR) and AI Act."

 

The article goes on to say that many big tech companies also have close relationships with governments through public sector contracts across the world, with Microsoft and Google framing these projects as "partnerships" with governments. The Vice President of Global Market Development in Microsoft's Worldwide Public Sector team even stated, "We also recognize that in the context of government operations, the actual technology is just part of the overall conversation; policy and regulatory settings must be considered and shaped, so too must the societal objectives of what governments want their country to achieve over the next 30+ years be factored into technology decision making" (emphasis added) -- which the article translates as meaning that "Microsoft couples the company's public sector contract provision with influencing regulatory agendas." It points out that, given the sheer scale of these contracts globally, the level of access and influence achieved through them is significant.

 

And when lobbying doesn't work to let Big Tech get its way, the article explains, big tech companies have been known to abuse their market power by threatening to withdraw from a market. It gives the example of how, after Nigeria's Federal Competition and Consumer Protection Commission (FCCPC) ruled that Meta had violated data protection and consumer rights laws, Meta threatened to withdraw its services from the Nigerian market, with the Director of Corporate Affairs at the FCCPC characterizing the threat as an attempt to provoke negative public sentiment and pressure the FCCPC to reverse its decision. 

 

The article says, "We will likely never fully know the extent of Big Tech's lobbying power[,] as many interactions go unreported[,] and the level of influence of these companies is impossible to quantify. However, we do know that Big Tech [is] spending a significant amount on lobbying policymakers. We know that these companies often have disproportionate access to decision-makers. And we know that while there have been strides in digital regulation during recent years -- such as the Digital Services Act and Digital Markets Act in the EU -- legislation that fully addresses the problematic business models of Big Tech is still wanting. Ultimately, lobbying is a tool [big tech companies] use to ensure they maintain their powerful positions."

 

With the amount of public lobbying already astronomical, who knows how many backdoor dealings take place that influence big decisions, such as the one featured in the image above.

 

How can laws hold Big Tech in check, when Big Tech employs so many strategies to influence the law in its favor?

 

 

4) Big tech companies have come to dominate technology under the guise of creating and democratizing technological advancements and innovative solutions, while in actuality doing so to serve their own interests: creating an environment intrinsically favorable to their existence and expansion, and strategically expanding their influence into all sectors.

 

 

The article Why and how is the power of Big Tech increasing in the policy process? The case of generative AI investigates in depth how big tech companies use their technological monopoly and political influence to reshape the policy landscape and establish themselves as key actors in the policy process. It explores Big Tech's agenda-driven dominance of the technology stream, describing how, unlike the innovation-centric technology stream -- which is aimed at technological advancements and innovative solutions (like those pictured in the photo above) -- the Big Tech-centric technology stream is calibrated to serve two interrelated goals. The first is to create a political, policy, and sociocultural environment that is intrinsically favorable to the existence and expansion of Big Tech, often leading to self-governance mechanisms and a regulatory landscape characterized by minimal governmental oversight. The second is to accelerate the diffusion of specific technologies (most recently, generative artificial intelligence (GenAI)) across diverse sectors of society. The overarching objective is not so much the democratization of technological innovation as the strategic expansion of Big Tech's influence, both vertically within its core sectors and horizontally across new, often tangential sectors, "exerting universal and ubiquitous influence within and across streams, to primarily serve [its] self-interests rather than promote innovation."

 

Furthermore, unlike the innovation-centric technology stream, which tends to limit the activities of technology constituents to the technology stream, under the Big Tech-centric stream, Big Tech has been actively engaging with actors in the problem, policy, and politics streams, extending its influence into each of them.

 

Consequently, unlike the innovation-centric technology stream, where technology constituencies may be absent or operate independently in specific policy areas, the Big Tech-centric technology stream exerts a pervasive influence. This stream is not confined to particular policy sectors or localized governance arrangements, and, instead, manifests as an increasingly ubiquitous force prevalent across diverse policy terrains and nation-states.

 

The article posits that big tech companies have become crucial players in the entire policy process, identifying three dimensions of their influence: their prevalence in other streams, their infiltration of various policy domains, and their presence across the various stages of the policy cycle. It describes how, within the four streams, Big Tech is not merely an observer in the epistemic communities, instrument (and technology) constituencies, and advocacy coalitions, but an active "entrepreneur" directly involved in bringing about policy change in its favor. The article goes into how Big Tech acts as a problem broker in the problem stream, highlighting certain issues as problem areas (e.g., its demands for regulation of GenAI or TikTok in the U.S.) while suppressing others (e.g., the ethics washing of AI). It also describes how Big Tech acts as a policy entrepreneur by advancing the use of digital platforms to solve policy problems (e.g., creating contact-tracing apps during COVID), and as a political entrepreneur, actively mobilizing its resources to shape political institutions and actors to further its interests (e.g., lobbying to kill the American Innovation and Choice Online Act and the Open App Markets Act).

 

The article says that, given its role in developing digital platforms and research and development, the role of Big Tech as technology innovators does not require much elaboration.

 

The article also says that Big Tech's omnipresence increasingly synchronizes streams that were traditionally thought to be independent, with its reach and influence rendering it a "super policy entrepreneur" that possesses the resources and the technological prowess not only to shape and exploit, but also to potentially create focusing events, thereby dictating the timing and nature of policy windows that it can use to affect policy change.

 

The article also goes into how Big Tech is not confined to specific policy domains, but increasingly manifests as a ubiquitous force across diverse policy terrains -- beyond more traditional sectors such as information and communication, finance, the marketplace, or digital hardware, and into non-traditional domains such as transportation. Since the launch of GenAI models, it has had robust models under trial in sectors such as education, health, and even defense, providing learning, assessment, and simulation tools within government, as well as supporting administrative record-keeping, analytics, and chatbot design.

 

Similarly, the article says, Big Tech has been establishing its presence across the different stages of the policy cycle, assuming roles that extend far beyond agenda setting: it now influences policy formulation, policy implementation, and policy analysis and evaluation, which in turn gives it the power to affect whether a particular policy program is terminated or continued.

 

The article says that, at the policy formulation stage, Big Tech is able to redefine societal problems and propose innovative policy solutions, thanks to its advanced analytical capabilities, its ownership of and control over popular digital tools, and its ability to position itself, its lobbyists, its researchers, and its past (or future) employees in key positions. In the decision-making stage, in addition to exerting influence on politicians, Big Tech has taken advantage of its financial power and its control over digital infrastructure, data, and information to increasingly assume the role of decision maker, inserting itself into various issue areas. In terms of policy implementation, most policy programs involve and rely on technological tools or infrastructure developed or controlled by Big Tech -- including cloud services from Amazon, cybersecurity solutions from Microsoft, and GenAI, which is increasingly becoming central to government operations worldwide. Additionally, Big Tech often must implement and enforce regulatory decisions within its own platforms, giving it considerable control over how policies are actualized in the digital space and beyond. With policy evaluation, Big Tech has taken on an increasingly prominent role, as ChatGPT and other GenAI models -- with their ability to analyze vast amounts of data instantly and generate high-quality reports -- have already been applied to policy analysis and evaluation, which affects whether a particular policy program is terminated or continued. This allows Big Tech to potentially influence the evaluation process by deciding which data to provide, and how to analyze and report the data.

 

The article concludes that Big Tech's domination of the technology sphere, and its continuing expansion, have allowed it to become a central player in domestic policy domains and to emerge as a state-like actor on the global stage, transforming it into a "super policy entrepreneur."

 

As the article shows, Big Tech's infiltration of every area of policy has allowed it to shape regulation and policy developments from the inside out, on all levels, in its favor, while purporting to do so for our benefit.

 

It's important to remember that self-interest is always Big Tech's driving motive, so that we are not tricked by the ways in which it sells itself and its companies' developments.

 

 

5) Big tech companies claim to be working for the social good, and even register as "public benefit corporations" -- not for the altruistic reasons that they have sold many people on, but to avoid regulation and, in fact, gain even more power, by successfully charming the public and government into supporting their self-interested self-regulation.

 

Big Tech's false claims of acting for the social good are how X was able to gain the ability to censor its users with its "Freedom of Speech, Not Reach" policy, and how Facebook is able to fund its own fact-checkers to "debunk" narratives that go against its interests, while claiming that it has done so for our benefit.

 

As the image above shows, Big Tech finds ways to approve itself.

 

As the article Big Tech's Big Lie says, "Big [t]ech companies often claim to be working for the social good, and in some cases, are even registered as 'public benefit corporations.' But this is a mirage of altruism that seeks to help them avoid regulation and grab power."

 

The article explains how, in an effort to counter the momentum towards effective state regulation, tech CEOs like Sam Altman and Elon Musk position themselves as the "responsible stewards of emerging tech," and as "the only people who can be trusted to save us from the existential threats posed by the very technologies which they wield" -- claiming to be in favor of regulation, though the unspoken part of these claims is always that they favor regulation on their own terms only. As the article says, "And what profit-seeking corporation wouldn't want to dictate the rules which might seriously threaten their profitability?"

 

The article says that, today, the cult of personality built around tech CEOs like Elon Musk -- and the resulting idealization and heroic image of their ventures and decisions -- coupled with the excessive economic power wielded by today's leading tech corporations (several of which outsize most national economies), has given big tech companies far more clout to employ while lobbying the highest-level government officials that their status and money give them access to. 

 

The article goes on to describe how, "[i]n addition to this charm offensive, which serves as an effective distraction from the current harms being caused by AI, efforts are underway among tech companies to redefine themselves as inherently good corporate actors, whose modus operandi is to advance public benefit."

 

It explains how, in 2013, Delaware revised its corporate legal statutes to allow corporations to convert into so-called "public benefit corporations" (PBCs), mandated not only to act in the best interests of their shareholders, but also for the benefit of an identified "social good" -- when they are, in fact, "just the latest iteration of tech companies' long-standing efforts to maximize their profits at the expense of our human rights[,] by evading effective state regulation."

 

The article states that this would not be the first time that major tech companies have sought to stave off regulation by presenting themselves as inherently good and mission-driven. It describes how our access to "free" online services -- from search to email, social media, and streaming -- is predicated on the harvesting and analysis of our most intimate personal data, and how this model was able to take root in all aspects of our social lives because of big tech corporations' early successes at presenting themselves as benign actors operating for the social good.

 

As the article Can AI Public Benefit Corporations truly serve the public? says, since the intense hype around generative artificial intelligence began, the idea of "public benefit" has become an effective branding tool -- promising potential future societal benefits while enhancing corporate power. As an example, the article describes how OpenAI transitioned into a for-profit structure when it recognized the potential applications of its products and required significant investment from Microsoft in 2019. It established a wholly-owned subsidiary that could generate "limited" profits for its shareholders, with the limit set at 100 times the initial investment -- "limiting" Microsoft, which has invested over $10 billion, to withdrawing a cumulative profit of "only" about a trillion dollars (such that Microsoft effectively has no profit limit in the foreseeable future).
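The arithmetic behind that "cap" (my own back-of-the-envelope check, using the figures above): a profit limit of 100 times the investment, applied to an investment of over $10 billion, works out to at least 100 × $10 billion = $1 trillion -- a ceiling so high that, in practice, it is no ceiling at all.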

 

The article says that, "[a]lthough there are no significant differences between companies for the public benefit and for-profit companies, apart from the added "value" to the fiduciary duty, which is difficult to quantify, this category holds internal and external branding value. It says how, in recent months, almost all companies working in the field of artificial intelligence (especially the younger ones), emphasize the dimension of doing good -- or, alternatively, not doing harm -- in their activities. They "tend to overstate the importance of their work for the common good in the present, and flood the discourse with grandiose future capabilities, with vague ideas such as "increasing human well-being," that, despite lacking scientific support, successfully divert the public's attention away from how these companies' activities might harm well-being -- neglecting to address such issues as the consequences for the environment resulting from the race to build larger and larger models, the indiscriminate harvesting of personal data without consent, and the exploitation of labor.

 

For instance, such discourses have diverted the public's attention away from the fact that, in the same year that Sama (an outsourcing company that hires contract workers in developing countries to sort and filter data for artificial intelligence models) became a PBC, it was revealed that it had employed hundreds of workers in Kenya for less than $2 per hour to sort disturbing and traumatizing content for OpenAI -- with working conditions so appalling that they led to employee departures and documented mental-health crises, until Sama announced a significant reduction in its work with OpenAI.

 

The article goes into how, "We don't address the outsourcing of this arduous human labor to OpenAI, a company that claims to build its products with only a handful of engineers, nor their active efforts to suppress regulations that would require consent for using internet users' personal data."

 

As the article concludes, "'Public Benefit Corporation' is a nice branding idea, but the primary unanswered question remains: what benefit, and for whom?"

 

 

6) Big tech companies claim to be independent companies, when many were linked to and funded by the CIA, and stole the ideas for and behind their companies from others.

 

Like the shield in the image to the side, Facebook was clearly protecting the interests of other big tech companies when it kept taking down and limiting the effectiveness of my post that didn't even mention it, and I couldn't understand why -- until I had my man explain the connection to me.

 

If Facebook can censor my small share about Google, Amazon, and Apple, then imagine what information about each of the companies is being held back and hidden by the services run by the other companies.

 

My man told me that the big tech companies were all fronting as individual companies, when, in reality, most were funded by and linked to the Central Intelligence Agency (CIA), and that, with perhaps the exception of Apple, most did not even come up with the ideas for or behind their companies themselves, but, rather, stole the ideas from others. He gave the examples of how Mark Zuckerberg had stolen the idea for Facebook from the Winklevoss twins, how Bill Gates had bought the idea for Microsoft from someone else, and more. You can read more about the Winklevoss-Facebook connection, which resulted in the Winklevoss twins receiving a $65-million settlement, in Everything You've Read About Harvard's Winklevoss Twins Is Wrong.

 

We know that the fake news media and fake "fact-checking" sites like Snopes have flooded the internet with conspiracy-theory propaganda designed to cover up the true origins of the big tech companies, but I have dug and found evidence showing the CIA's connections with them, including: its indirect funding of Facebook and Google (as my man had informed me about), through its secret venture capital fund, which funded companies that in turn funded Facebook and Google (see Facebook's CIA Friends and The CIA's Secret VC Fund to read more about these); how the core of Google was created in large part through research funded by CIA-related grants (see Google's true origin partly lies in CIA and NSA research grants for mass surveillance, for more information); how Bill Gates and Steve Jobs took key ideas from Xerox (see The Xerox Thieves: Steve Jobs & Bill Gates, for more information); how Bill Gates took his operating system from Gary Kildall (see How Bill Gates stole MS Windows, for more information); how Amazon was given a $600-million contract by the CIA to provide specially-designed cloud-computing services for the agency (see 'What's the CIA doing on Amazon's cloud?' Open-government activists want to know, for more information); and how Amazon and Google have both invested in companies that the CIA's venture capital fund has invested in (see What Big Tech Has Acquired From In-Q-Tel, The CIA's VC Arm, for more information).  

 

After rereading the article Google's true origin partly lies in CIA and NSA research grants for mass surveillance, I began to understand what my man was telling me about the CIA's connection to Big Tech. The article helps explain the interest of the CIA and intelligence community in funding tech companies. As the article says, "Some of the research that led to Google's ambitious creation was funded and coordinated by a research group established by the intelligence community to find ways to track individuals and groups online."

 

As the article describes, "The intelligence community hoped that the nation's leading computer scientists could take non-classified information and user data, combine it with what would become known as the internet, and begin to create for-profit, commercial enterprises to suit the needs of both the intelligence community and the public. They hoped to direct the supercomputing revolution from the start[,] in order to make sense of what millions of human beings did inside this digital information network. That collaboration has made a comprehensive public-private mass surveillance state possible today."

 

Put simply, the CIA and other intelligence agencies wanted to help produce, and ensure the success of, products that could collect data on people's use of technologies -- products that could be sold to the public as beneficial, but that the agencies could later turn to in order to collect data on those products' users. 

 

The article described how the Central Intelligence Agency (CIA) and the National Security Agency (NSA) had come to realize that "[i]f the intelligence community wanted to conduct mass surveillance for national security purposes, it would require cooperation between the government and the emerging supercomputing companies."

 

Through something called the Massive Digital Data Systems (MDDS) project -- an unclassified but highly compartmentalized program managed for the CIA and the NSA by large military and intelligence contractors -- they reached out to scientists at American universities, seeding funding to the most promising supercomputing efforts, in order to guide the creation of tools that could make massive amounts of information useful for both the private sector and the intelligence community.

 

"The research would largely be funded and managed by unclassified science agencies like NSF [the National Science Foundation], which would allow the architecture to be scaled up in the private sector[,] if it managed to achieve what the intelligence community hoped for."

 

Over the next few years, the program's stated aim was to provide more than a dozen grants of several million dollars each to advance this research concept, with the grants directed largely through the NSF, so that the most promising, successful efforts could be captured as intellectual property, and form the basis of companies attracting investments from Silicon Valley. The article states how, today, the NSF provides nearly 90% of all federal funding for university-based computer-science research.

 

The CIA and NSA's end goal was to be able to identify what they called "birds of a feather" -- predicting that like-minded groups of humans would move together online, and wanting to track digital fingerprints inside the rapidly-expanding World Wide Web. 

 

They intended to work with emerging commercial-data companies to track like-minded groups of people across the internet, and identify them from the digital fingerprints they left behind -- much like forensic scientists use fingerprint smudges to identify criminals. "Just as 'birds of a feather flock together,' they predicted that potential terrorists would communicate with each other in this new global, connected world" -- and that they could find them by identifying patterns in this massive amount of new information, and that, once these groups were identified, their digital trails could be followed everywhere.
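To make the "birds of a feather" idea concrete, here is a deliberately simple, hypothetical sketch of how like-minded users could, in principle, be grouped from their digital traces: link any two users who show up around enough of the same things, then merge linked users into "flocks." This is only a toy illustration of the concept -- not the intelligence community's actual method -- and every name and threshold in it is invented:

```python
# A toy sketch (my own illustration, not the MDDS program's actual method) of the
# "birds of a feather" idea: if two users repeatedly show up around the same
# things -- the same forums, pages, or contacts -- group them together.

from collections import defaultdict
from itertools import combinations

def flock_together(activity, min_shared=2):
    """activity maps each user to the set of items (pages, groups, contacts)
    they interacted with; users sharing at least `min_shared` items are linked,
    and linked users are merged into the same flock."""
    # Link users who share enough items.
    linked = defaultdict(set)
    for a, b in combinations(activity, 2):
        if len(activity[a] & activity[b]) >= min_shared:
            linked[a].add(b)
            linked[b].add(a)
    # Merge linked users into flocks (connected components of the link graph).
    flocks, seen = [], set()
    for user in activity:
        if user in seen:
            continue
        flock, queue = set(), [user]
        while queue:
            current = queue.pop()
            if current in flock:
                continue
            flock.add(current)
            queue.extend(linked[current])
        seen |= flock
        flocks.append(flock)
    return flocks

# Invented example traces:
traces = {
    "user_a": {"forum_x", "page_y", "group_z"},
    "user_b": {"forum_x", "page_y"},
    "user_c": {"page_q"},
}
print(flock_together(traces))  # user_a and user_b end up in the same flock
```

Even a crude rule like this becomes more confident the more traces each user leaves behind -- which hints at why collecting massive amounts of behavioral data mattered so much to the program's goals as described above.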

 

The article describes how one of the first MDDS grants went to Sergey Brin and Larry Page in 1995, helping fund research that later became the heart of Google. The pair received two grants: one from the CIA, whose primary objective was "query optimization of very complex queries that are described using the 'query flocks' approach," and one from DARPA and the NSF that was part of a coordinated effort to build a massive digital library using the internet as its backbone. While both grants funded the graduate students' rapid advances in web-page ranking, as well as tracking (and making sense of) user queries, the MDDS grant is left out of Google's origin stories. 

 

The article explains how the grants allowed Brin and Page to do their work and contributed to their breakthroughs in web-page ranking and in tracking user queries -- and how the fact that the MDDS grant is never mentioned, and its role in the research denied, suggests that it is something the intelligence community does not want the public to know about.

 

The article describes how, "The CIA and NSA funded an unclassified, compartmentalized program designed from its inception to spur the development of something that looks almost exactly like Google. Brin's breakthrough research on page ranking by tracking user queries and linking them to the many searches conducted -- essentially identifying 'birds of a feather' -- was largely the aim of the intelligence community's MDDS program. And Google succeeded beyond their wildest dreams."

 

This is how many big tech companies got their start: the CIA and related organizations secretly funded research and companies that they could come back to again and again as sources of the intelligence data those companies were perfectly positioned to provide -- and that backing is how these companies were able to grow to the levels of power and status that they now hold.

 

And, as my man shared with me after I had relayed what I had discovered to him, their motives for doing so were/are far deeper than this.

 

 

7) Big tech products like Facebook and the iPhone were actually developed as a way to trick us into logging our lives and tracking our own movements voluntarily, with the goal of using the data to help program AI to replace humans and human roles.

 

After I excitedly told my man about the better understanding I had reached of the CIA's role in, and goals for, funding Big Tech, he told me that there was much more going on, and informed me about the true origin of Facebook -- which, he told me, was started the same day that the U.S. military's Defense Advanced Research Projects Agency (DARPA), responsible for developing innovative and often secret technology for the military, shut down its LifeLog program, with Facebook essentially being a reiteration of, and a way to get around, DARPA's scrapped program.

 

My man sent me a bunch of screenshots of evidence showing the parallels between the LifeLog project and Facebook, as well as ones showing key people who had worked on LifeLog and were later given major positions at Facebook.

 

With the foundation he gave me, I researched the subject even more, and found even more sources and evidence supporting and confirming what my man had told me and shared with me.

 

The article Twenty-One Years Ago, the U.S. Military Tried to Record Whole Human Lives. It Ended Badly: Before Facebook, the military tried to make an all-knowing 'cyberdiary' called LifeLog described the LifeLog program and its end-goal links to AI: "In mid-2003, the U.S. Defense Advanced Research Projects Agency launched an ambitious program aimed at recording essentially all of a person's movements and conversations and everything they listened to, watched, read and bought." The idea behind the LifeLog initiative was to create a permanent, searchable, electronic diary of entire lives -- not only immortalizing users, but also contributing "to a growing body of data that military researchers hoped would contribute to the development of artificial intelligence capable of thinking like a human being does."

 

As the article said, "LifeLog was an iPhone before there were iPhones, social media before there was social media. It was potential all-seeing government surveillance before anyone worried about the NSA or had heard of Edward Snowden."

 

The article stated that, "[w]hile the program 'ended' barely a year after it began, effectively shamed out of existence by privacy-advocates and the media, it resurfaced under other guises that allowed much of what LifeLog aimed to achieve to happen, anyway."

 

The article showed that the ideas behind LifeLog were actually much, much older than the program itself: in 1945, a government scientist named Vannevar Bush described an idea he termed the "Memex" -- similar in spirit to today's smartphones -- a "device in which an individual stores all his books, records[,] and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility."

 

It then described another project, from late 2001, in which Gordon Bell volunteered to be the subject of MyLifeBits -- a life-logging experiment run for Microsoft by computer scientists Jim Gemmell and Roger Lueder. For 17 years, Bell digitized and saved "a lifetime's worth of articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures[,] and voice recordings," later adding phone calls, instant-messaging transcripts, television, and radio to his record, while Gemmell and Lueder wrote software for indexing and searching Bell's log.

 

DARPA, however, saw the military value in a comprehensive record of a person's life, and, in late 2002, launched a wide-ranging effort to develop new, more sophisticated artificial intelligence, which included an "enduring personalized cognitive assistant" -- basically an artificial-intelligence secretary that could learn by watching, and that would need lots of data on human behavior in order to replicate human decision-making. Douglas Gage, a former Navy researcher who had recently joined DARPA, drew inspiration from Bush and Bell to propose LifeLog, reasoning that, if enough people recorded enough of their lives, the combined information would amount to "the ontology of a human life."
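To make concrete why such an assistant "would need lots of data on human behavior," here is a deliberately simple, hypothetical sketch of learning by watching: log (situation, action) pairs from a person's day, then suggest the action the person most often took in a given situation. It is only a toy illustration of the general idea, not DARPA's actual PAL system, and the situations and actions in it are invented:

```python
# A toy sketch of "learning by watching": record (situation, action) pairs from a
# person's day, then predict the most common action for a given situation.

from collections import Counter, defaultdict

class WatchingAssistant:
    def __init__(self):
        # For each situation, count how often each action was observed.
        self.observations = defaultdict(Counter)

    def watch(self, situation, action):
        """Log one observed decision."""
        self.observations[situation][action] += 1

    def suggest(self, situation):
        """Guess the action the person usually takes in this situation."""
        if not self.observations[situation]:
            return None  # not enough data -- hence the need for massive behavior logs
        return self.observations[situation].most_common(1)[0][0]

assistant = WatchingAssistant()
assistant.watch("email from boss", "reply immediately")
assistant.watch("email from boss", "reply immediately")
assistant.watch("email from newsletter", "archive")
print(assistant.suggest("email from boss"))  # -> "reply immediately"
```

Even this crude counting approach only works once it has seen many examples of a person's routine, which is why, as the article describes, a comprehensive record of everyday life was treated as the raw material for "humanized" decision-making.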

 

Lee Tien, a privacy lawyer with the Electronic Frontier Foundation, described how "DARPA clearly saw how increasing digitization of human experience would make the data needed to model everyday life accessible in machine-readable form."

 

As the article said, "It was the private sector, not the government, that is coming close to turning Gage's LifeLog, Bell's MyLifeBits[,] and Bush's Memex into reality for millions of people. And ironically for privacy advocates, we practically beg for it."

 

As the article details, in 2004, Mark Zuckerberg and Eduardo Saverin founded Facebook, and three years later, Apple introduced the iPhone, with Steven Aftergood (who directed the Federation of American Scientists Project on Government Secrecy, where he was a leading practitioner of FOIA and a prominent critic of official secrecy) describing smartphones and social media as "LifeLog equivalents."

 

Tien said that, more recently, wearable devices and smart-home systems like Alexa have accelerated our acceptance of digital life logs, while Gage said, "I think that Facebook is the real face of pseudo-LifeLog at this point."

 

Both Facebook and Apple have come under fire for gathering users' data and passing it along to the government, and Aftergood said, "We have ended up providing the same kind of detailed personal information to advertisers and data brokers[,] and without arousing the kind of opposition that LifeLog provoked."

 

Unfortunately, the platforms pass for a friendly diary (like the one in the photo to the side) far better than anything tangibly linked to the government does -- enough to allow them to trick many users into using them in the same way that LifeLog was intended to be used.

 

As the article says, "But the public has rejected military-developed, government[-]run digital life records in favor of similar systems developed and run by corporations. It doesn't seem to matter to most people that the corporate social media watch them arguably as much as a government system would have."

 

As the article Unraveling the mystery: Facebook's alleged role in the LifeLog project revealed says, "The parallels between LifeLog's vision and Facebook's data-centric approach are eerily compelling. LifeLog sought to create a digital memory, and Facebook accomplishes a similar feat in its everyday interactions, serving as a real-time chronicle of our thoughts, connections[,] and activities. The platform's algorithms meticulously organize this information to create a comprehensive profile of each user."

 

The article brings up how, in contrast to Facebook's supposed independence, the presence of people associated with intelligence agencies in the company's executive suite raises troubling questions -- with former CIA employees and DARPA executives holding key positions within Facebook and its extended family, casting a shadow over the platform's apparent autonomy.

 

It says that the sinister undertones become even more apparent when one considers Facebook's strict requirements when creating an account -- real names, passport photos, and detailed personal information -- fueling suspicions that Facebook is more than just a private company.

 

It says that the uncanny correlation between the demise of LifeLog and the birth of Facebook invites speculation as to whether the social media giant is in fact a secret successor to the controversial DARPA project -- with the voluntary transmission of vast amounts of personal, psychological[,] and behavioral data by Facebook users painting a frightening picture of a surveillance apparatus with unprecedented information power.

 

It ends by saying that, amid growing concerns about privacy in the digital age, the mysterious links between government initiatives, intelligence agencies, and Facebook underscore the delicate balance between technological advancement and the erosion of individual privacy, with the troubling question remaining: Is Facebook the wolf in the shepherd's costume of the LifeLog project, driving the masses into a surveillance state under the guise of a social network?

 

The article The Military Origins of Facebook goes into even more detail about the connections, saying that, while Facebook has long sought to portray itself as a "town square" that allows people from across the world to connect, a deeper look into its apparently military origins and continual military connections reveals that the world's largest social network was always intended to act as a surveillance tool to identify and target domestic dissent.

 

The article describes LifeLog as one of several controversial post-9/11 surveillance programs pursued by the Pentagon's Defense Advanced Research Projects Agency (DARPA), that threatened to destroy privacy and civil liberties in the United States, while also seeking to harvest data for producing "humanized" artificial intelligence (AI) -- describing Facebook as "not the only Silicon Valley giant whose origins coincide closely with this same series of DARPA initiatives and whose current activities are providing both the engine and the fuel for a hi-tech war on domestic dissent."

 

The article describes how, after the September 11 attacks, DARPA, in close collaboration with the U.S. intelligence community (specifically the CIA), began developing a "precrime" approach to combating terrorism, known as Total Information Awareness (or "TIA"), with the purpose of developing an "all-seeing" military-surveillance apparatus -- using the official logic that invasive surveillance of the entire U.S. population was necessary to prevent terrorist attacks, bioterrorism events, and even naturally-occurring disease outbreaks. 

 

The TIA program outraged the public, so DARPA changed its name to "Terrorist Information Awareness" to make it sound less like a national-security panopticon, and more like a program aiming specifically at terrorists, post-9/11. 

 

The article reveals that the TIA projects were not actually closed down, however, with most moved to the classified portfolios of the Pentagon and U.S. intelligence community, and some becoming intelligence-funded and intelligence-guided private-sector endeavors, such as Peter Thiel's Palantir, while others resurfaced years later under the guise of combatting the COVID-19 crisis. 

 

Critics in mainstream media outlets and elsewhere were quick to point out that the program would inevitably be used to build profiles on dissidents as well as suspected terrorists, with one critic, Lee Tien, warning that the programs that DARPA was pursuing, including LifeLog, "have obvious, easy paths to Homeland Security deployments." 

 

Despite DARPA publicly insisting that LifeLog and TIA were not connected (despite their obvious parallels), and that LifeLog would not be used for "clandestine surveillance," DARPA's own documentation on LifeLog noted that the project "[would] be able... to infer the user's routines, habits and relationships with other people, organizations, places and objects, and to exploit these patterns to ease its task," and acknowledged its potential use as a tool of mass surveillance.

 

The article further confirmed what my man had told me by stating that, on top of the ability to profile potential enemies of the state, LifeLog had another goal that was arguably more important to the national-security state and its academic partners: the "humanization" and advancement of artificial intelligence, with DARPA wanting to create a brain-machine interface that would feed human thoughts directly into machines, to advance AI, by keeping it constantly awash in freshly-mined data. 

 

One of DARPA's outlined projects, the Cognitive Computing Initiative, sought to develop sophisticated artificial intelligence through the creation of an "enduring personalized cognitive assistant," later termed the "Perceptive Assistant that Learns," or "PAL." PAL, which from the very beginning was tied to LifeLog, was originally intended to result in granting an AI "assistant" human-like decision-making and comprehension abilities, by spinning masses of unstructured data into narrative format. 

 

The would-be main researchers for the LifeLog project also reflect the program's end goal of creating humanized AI -- seeking to build AI supercomputers capable of human-like thought. 

 

Soon after the LifeLog program was shuttered, critics worried that, like TIA, it would continue under a different name. For example, Lee Tien told VICE at the time of LifeLog's cancellation, "It would not surprise me to learn that the government continued to fund research that pushed this area forward without calling it LifeLog."

 

The article says how it was later revealed that TIA was never actually shut down, with its various programs having been covertly divided up among the web of military and intelligence agencies that make up the U.S. national-security state, some of it being privatized. It shows the connection between TIA and Palantir: The same month that TIA was pressured to change its name after growing backlash, Peter Thiel incorporated Palantir, which was, incidentally, developing the core panopticon software that TIA had hoped to wield.

 

Soon after Palantir's incorporation, though the exact timing and details of the investment remain hidden from the public, the CIA's In-Q-Tel venture capital firm became the company's first backer, aside from Thiel himself, giving it an estimated $2 million, though not publicly reporting its stake in it until mid-2006. 

 

Palantir's CEO Alex Karp told the New York Times in October 2020, "the real value of the In-Q-Tel investment was that it gave Palantir access to the CIA analysts who were its intended clients." 

 

After the In-Q-Tel investment, the CIA would be Palantir’s only client until 2008. During that period, Palantir's two top engineers traveled to CIA headquarters every two weeks, during which time CIA analysts would test [Palantir's software] out and offer feedback, and the engineers would then fly back to California to tweak it.

 

Today, Palantir's products are used for mass surveillance, predictive policing, and other disconcerting policies of the U.S. national-security state. The article says that the decision to turn controversial DARPA-led programs into private ventures, however, was not limited to Thiel's Palantir, and included Facebook.

 

The article describes how, a few months into Facebook's launch, Sean Parker was brought onto Facebook's executive team, and later connected Facebook with its first outside investor, Peter Thiel -- who, at that time, in coordination with the CIA, was actively trying to resurrect controversial DARPA programs that had been dismantled the previous year. Thiel formally acquired $500,000 worth of Facebook shares and was added to its board. Notably, Sean Parker, who became Facebook's first president, also had a history with the CIA, which recruited him at the age of 16, soon after he had been busted by the FBI for hacking corporate and military databases. 

 

The article points out how Thiel's longstanding symbiotic relationship with Facebook extends to his company Palantir, as the data that Facebook users make public invariably winds up in Palantir's databases, and helps drive the surveillance engine that Palantir runs for a handful of U.S. police departments, the military, and the intelligence community.

 

The article says that Facebook data is slated to help power the coming "war on domestic terror," given that the information shared on the platform is being used in "precrime" capture of U.S. citizens, domestically, and that it's worth examining the fact that Thiel's exertions to resurrect the main aspects of TIA as his own private company coincided with his becoming the first outside investor in Facebook, which was essentially the analogue of another DARPA program deeply intertwined with TIA. 

 

The article views Facebook as a front, pointing out how the early involvement of Parker and Thiel in the project -- particularly given the timing of Thiel's other activities -- reveals that the national-security state was involved in Facebook's rise.

 

The article notes how LifeLog's DARPA architect himself, Gage, told VICE that "Facebook is the real face of pseudo-LifeLog at this point," tellingly adding, "We have ended up providing the same kind of detailed personal information to advertisers and data brokers and without arousing the kind of opposition that LifeLog provoked." 

 

The article says that users of Facebook and other large social media platforms have so far been content to allow these platforms to sell their private data, so long as they publicly operate as private enterprises, and that backlash only really emerged when such activities were publicly tied to the U.S. government, and especially the U.S. military -- even though Facebook and other tech giants routinely share their users' data with the national-security state, such that, in practice, there is little difference between the public and private entities.

 

It points out how Edward Snowden, the National Security Agency (NSA) whistleblower, notably warned in 2019 that Facebook is just as untrustworthy as U.S. intelligence, stating that "Facebook's internal purpose, whether they state it publicly or not, is to compile perfect records of private lives to the maximum extent of their capability, and then exploit that for their own corporate enrichment. And damn the consequences," also stating in the same interview that "the more Google knows about you, the more Facebook knows about you, the more they are able... to create permanent records of private lives, the more influence and power they have over us." 

 

The article states that this underscores how both Facebook and intelligence-linked Google have accomplished much of what LifeLog had aimed to do, but on a much larger scale than what DARPA had originally envisioned, stating that the reality is that most of the large Silicon Valley companies of today have been closely linked to the U.S. national-security state establishment since their inception, with notable examples including Facebook, Palantir, Google, and Oracle. It states that, today, these companies are more openly collaborating with the military-intelligence agencies that guided their development and/or provided early funding, as they are used to provide the data needed to fuel the newly-announced war on domestic terror and its accompanying algorithms. 

 

The article says that it is hardly a coincidence that someone like Peter Thiel, who built Palantir with the CIA and helped ensure Facebook's rise, is also heavily involved in Big Data AI-driven "predictive policing" approaches to surveillance and law enforcement, both through Palantir and through his other investments, and that TIA, LifeLog, and related government and private programs and institutions launched after 9/11 were always intended to be used against the American public in a war against dissent.

 

The article ends on a question: "Ultimately, the illusion of Facebook and related companies as being independent of the U.S. national-security state has prevented a recognition of the reality of social media platforms and their long-intended, yet covert uses, which we are beginning to see move into the open... Now, with billions of people conditioned to use Facebook and social media as part of their daily lives, the question becomes: If that illusion were to be irrevocably shattered today, would it make a difference to Facebook's users? Or has the populace become so conditioned to surrendering their private data in exchange for dopamine-fueled social-validation loops that it no longer matters who ends up holding that data?"

 

The article A Brief History of LifeLog, Facebook, DARPA's Information Awareness Office (IAO), and Why You Should Care About Any of It really breaks down the connections and purposes of the projects, including their end goals and government/Facebook crossovers. It describes how, "When Facebook was first rolled out on the World Wide Web on February 4, 2004, very few were aware of the 'coincidental' switchover from DARPA's LifeLog to this new social media site where users posted highlights from their day-to-day happenings and special events, usually in the form of images or videos. While most claims online say there is no relation to LifeLog being canceled on the same day that Facebook rolled out, it's hard to not question their connections, when it appears that their end goals were very much the same -- collecting personal data, and training facial recognition software, in the name of 'precrime' or 'predictive policing,' post 9/11."

 

It states how, according to LifeLog's June 2003 Proposer Information Pamphlet, the long-term goal was to eventually end up with a haptic product where the user/wearer would collect data, and in a "synthesizing mode" allow for "synthetic game characters and humanoid robots to lead more 'realistic' lives."

 

The New York Times reported on LifeLog at the time, describing the program as taking in "all of a subject's experience, from phone numbers dialed and e-mail messages viewed, to every breath taken, step made, and place gone."

 

The NYT said that DARPA spokeswoman Jan Walker claimed that LifeLog had nothing to do with the agency's highly criticized TIA, but that, rather, the goal of the new program was "to create a searchable database of human lives, initially those of the developers, to promote artificial intelligence."

 

To do so, the office said that the system had to index the details of daily life, and make it possible to infer the user's routines, habits, and relationships with other people, organizations, places, and objects, and to exploit these patterns to ease its task.

 

WIRED reported that MIT's David Karger wrote in an email, "I am sure that such research will continue to be funded under some other title. I can't imagine Darpa 'dropping out' of such a key research area."

 

The article goes on to show the government/Facebook connections in the connections of the personnel:

 

Marne Levine, who previously worked at the Treasury Department as Chief of Staff for the National Economic Council, became the first COO of Instagram, which has now merged with Facebook under Meta. She is married to Philip Deutch, who is the son of John Deutch, the director of the CIA during part of the Clinton administration.

 

Joel Kaplan, who originally joined Facebook as VP of U.S. Public Policy, and succeeded Levine as VP of Global Public Policy at Facebook, was the Deputy Chief of Staff for Policy under President George W. Bush from 2006 to 2009.

 

Max Kelly, Facebook's former Chief Security Officer, who once worked for the FBI, now works for the NSA. The New York Times wrote in 2013 of Kelly's move from the tech realm to government projects, saying that, "to get their hands on the latest software technology to manipulate and take advantage of large volumes of data, United States intelligence agencies invest in Silicon Valley start-ups, award classified contracts[,] and recruit technology experts like Mr. Kelly."

 

Regina Dugan, a former DARPA Director, founded the elusive and mysterious Building 8 at Facebook, leaving the company 18 months later. According to Vox, "Before joining Facebook, she led Google's Advanced Technology and Products team, which built things like modular smartphones and clothes outfitted with micro-sensors. Dugan also led the company's 'brain-computer interface project' [--] a new type of technology meant to translate a person's thoughts directly from their brain and onto a computer screen." Dugan is also well known for her 2013 unveiling of the Motorola/MC10 collaboration on the electronic authenticator tattoo at the D11 conference -- one of many "wearable" technology prototypes that the world would be introduced to in the coming years.

 

Sean Parker, Facebook's first president, and Napster co-founder, was recruited by the CIA at age 16, after he won the Virginia State Computer Science Fair. According to Forbes, "By high school[,] Parker was hacking into companies and universities. At 15[,] his hacking caught the attention of the FBI, earning him community service."

 

The article quotes opinion columnist Jeff Nesbit as saying, "When asked, the biggest technology and communications companies -- from Verizon and AT&T to Google, Facebook, and Microsoft -- say that they never deliberately and proactively offer up their vast databases on their customers to federal security and law enforcement agencies: They say that they only respond to subpoenas or requests that are filed properly under the terms of the Patriot Act." 

 

However, the article says that it's difficult not to draw the conclusion that LifeLog quietly became Facebook, which was then sold to the general public as a type of social media/scrapbooking app for staying connected to friends, family, and businesses -- with users voluntarily sharing their data, and never reading the fine print that explains how the data would be stored and used by third parties -- and that this conclusion is even easier to accept when we look at some of the government/Facebook crossovers.

 

The article goes on to add what my man also told me -- that Facebook/Meta's Metaverse is simply the next step in merging reality with the virtual world, and that what follows will be a fully immersive experience using wearables and eventually implants, where individual users looking to escape from their present reality or circumstances can do just that -- escape -- with the Metaverse being just the tip of the iceberg.

 

 

classified top secret signs

The article Google's true origin partly lies in CIA and NSA research grants for mass surveillance says that "most people still don't understand the degree to which the intelligence community relies on the world's biggest science and tech companies for its counter-terrorism and national-security work."

 

The article says that the constant requests from the government to big companies include, for example, between 2016 and 2017, more than 260,000 subpoenas, court orders, warrants, and other legal requests to Verizon, more than 250,000 such requests to AT&T, and nearly 24,000 subpoenas, search warrants, or court orders to Google. Direct national security or counter-terrorism requests are a small fraction of this overall group of requests, but the Patriot Act legal process has now become so routinized that the companies each have a group of employees who simply take care of the stream of requests.

 

The article says that the collaboration between the intelligence community and big commercial science and tech companies has been wildly successful, achieving its goal: when national security agencies need to identify and track people and groups, they know where to turn -- and they do so frequently.

 

Like my man told me, the CIA and other government agencies have been involved with, funded, and been influencing big tech companies secretly -- since before some were even created -- so to view big tech companies as independent of each other, and as operating as separate commercial entities simply competing for more market share is to allow them to continue to trick us into missing their deeper agendas and purposes. 

 

 

8) Big tech "rivalries" are just for show, and are built on interdependence.

 

Similarly, big tech companies are not really rivals, and working together actually benefits them in multiple ways.

 

A LinkedIn article, given the AI-summarized title "How Big Tech's Rivalries Are Built on Interdependence," says that big tech rivalries "dissolve into inter-dependent supply chains, for example, with Meta signing a $10B+ cloud deal with Google (its fiercest rival in digital advertising), and OpenAI feeding ChatGPT with Google Search results (via SerpAPI) and renting its GPUs [graphics processing units], while trying to make Google Search obsolete."

 

The article points out that there's an entangled web of interdependencies in AI, where one's most threatening competitor is often one's most critical vendor, so that "[e]veryone sells the shovel, even to the guy digging their grave." 

 

The article cites a number of reasons for why this is happening, including the fact that moats (with economic moats defined by WallStreetPrep as "a differentiating factor enabling the company to hold a competitive edge") are now rentable, and are often leased to the very people trying to cross them. 

 

The article says that what used to be a moat -- including distribution (iOS/Android), data (Search), or compute (GPUs at hyperscale) -- is increasingly sold as a SKU, and that, if one's "defensive asset" can be metered, it will be monetized... even to one's rivals.

 

Other factors that the article identifies as keeping big tech companies working together include the fact that infrastructure is too expensive to own alone; and that no single company can win in all areas, so that companies trade, with companies renting from rivals that they'd love to replace.

 

The next factor that the article identifies is that market power comes from volume: Meta has signed deals with every major cloud provider (AWS, Azure, Oracle, CoreWeave, and now Google Cloud), both as pricing arbitrage (exploiting price discrepancies for the same or equivalent assets across different markets or forms) and as regional hedging (avoiding taking sides, offsetting multiple risks, and diversifying relationships to maintain strategic flexibility, preserve autonomy, and reduce vulnerability in volatile environments). The article says that cloud is a commodity, and that power comes from being the customer that can move someone else's earnings call.

 

The next factor that the article lists is that "Time-to-Quality > Ideological Purity," such that, "[i]f the fastest path to product quality is to buy accuracy while you build your own index, you do both," since, "[i]n AI, months are market share." Because the race to stay relevant in AI-related areas is so cutthroat, companies are pressured to develop and grow as fast as possible, so that, if using their competitors' products or services will help them with their efficiency and output in building out their own products, they will do so. The article also says that "Google selling compute to OpenAI is not charity; it's toll collection on a rival's growth curve." As their competitors grow and use more of their products/services, big tech companies are able to benefit from the increased earnings from the increased usage.

 

The last factor that the article lists is that "Optics matter," as turning one's enemies into customers is good politics. It says that each big tech company that lands a rival as a customer bolsters its narratives both to Wall Street that "We grow no matter who wins," and to regulators that "We're not a monopoly, we power our competitors."

 

As the article says, "The stack is too entangled, too capital-intensive, and too unevenly distributed for anyone to play lone wolf. In this economy, independence is expensive[,] and rivalry is mostly theater."

 

In truth, the "rivalry" is mostly theater for the public, creating, as my man said, the illusion of choice between top companies, and hiding the fact that these companies are on top because they were chosen to be there, by the agencies that rely on their access to their huge databases of their users' data.

 

In the same way that they are interdependent with each other, big tech companies are interdependent with governments, which allow them to maintain and grow their monopolies, and to keep accessing our private data, in exchange for punishments so small that Big Tech barely notices them -- while Big Tech, in turn, gives governments a workaround to respecting our rights and privacy, by doing for them what they can't officially do themselves.

 

 

9) The power and access of big tech companies extend well past their users, and Big Tech has created so much dependence on itself and its services that problems that affect Big Tech now affect us.

 

sneeze

As my man has told me many times, "When America sneezes, the world catches a cold," to impress upon me how big an impact events in the United States tend to have on other countries, due to how tied the rest of the world is to the American economy. Because of how interdependent the world is, and how dependent other economies are on the health of America due to its dominance, America's effect on, and ability to affect, the world is staggering, like the immense sneeze pictured in the image above.

 

Big Tech, which has been extending its tentacles into every sector, and has been growing to monopolize areas in the same way that Amazon is doing with cloud computing, has developed a similar position of power and influence. Big tech companies take advantage of not having to report the interconnections between their multiple companies and entities, both to hide how many different companies they each have under their umbrella, and to slip past regulation that might better control their immense size and reach. And so many businesses and systems have become dependent on Big Tech that this creates risks for a world largely built on, and dependent on, big tech services and products.

 

As the article Why risks from big tech interdependencies require attention warns, so many key companies depend on Big Tech, with financial institutions and regional big techs relying on technological infrastructure and analytical tools developed by global big tech companies, that this dependence on a small number of critical technological providers exacerbates operational and concentration risks that may arise if these providers were to experience significant disruptions.

 

It goes into how big tech companies have already started offering a particularly broad range of financial services in emerging market and developing economies -- in some markets reaching dominant positions in payments, credit, and other services, for example, in the mobile payments markets in China and India.

 

The article says that big tech companies operate highly interconnected platform ecosystems, powered by multiple legal entities that share data, provide services to each other, and depend on each other to make the digital platform ecosystem work; and that big tech companies also have strong external interconnections, not only partnering with financial institutions to offer financial services, but also providing them with technology services such as cloud computing and data analytics -- services that have become critical to the operation of incumbent financial institutions, creating a situation of dependency on big tech companies.

 

Some big tech companies are also customers of cloud computing services, creating a network of third-party interdependencies -- external interdependencies both among big techs themselves and with financial institutions.

 

These internal and external interdependencies come with specific risks, in particular to operational resilience, with a failure in one part of a big tech group possibly rendering other parts unable to function, disrupting the flow of data between big tech entities, or resulting in outages or data breaches, with potential knock-on effects on the platform ecosystem and its users.

 

For example, when Amazon experienced a massive outage of its cloud computing service last October, it disrupted internet use around the world, taking down a broad range of online services, including social media, gaming, food delivery, streaming, and financial platforms. As the article Massive Amazon cloud outage has been resolved after disrupting internet use worldwide said, "The all-day disruption and the ensuing exasperation it caused served as the latest reminder that 21st century society is increasingly dependent on just a handful of companies for much of its internet technology, which seems to work reliably until it suddenly breaks down." 

 

Amazon Web Services (AWS) provides behind-the-scenes cloud computing infrastructure to some of the world's biggest organizations, including government departments, universities, and businesses, which is why my man told me about the outage, using it as an example to show me why it's not smart to have so many things dependent on so few companies.

 

The article says that the existing regulatory framework was not formulated with closely-connected digital platform ecosystems in mind, and hence misses the risks arising from interdependencies. Big Tech's operations in financial services are regulated under sectoral regimes, so big tech companies are treated like any other company, and their regulatory treatment depends on the type of financial activities in which they are engaged -- yet the regulatory instruments currently available under sectoral frameworks were designed to address traditional financial stability risks, not to mitigate the risks created by the strong intragroup dependencies and external interconnections inherent in big tech business models.

 

As the article says, "As the current regulatory approach does not sufficiently address the risks arising from big-tech interdependencies, there is a need to complement existing activity-based rules under sectoral regulations with specific entity-based requirements for big-tech operations in the financial sector."

 

Big tech companies are able to evade criticism for their ever-expanding areas of control and reach through blind spots in laws that fail to capture the inordinate number of companies that they have made interdependent with, and dependent on, them -- tricking us into not recognizing how dangerously far they have expanded, and how much they can now affect.

 

 

10) Big tech companies crowd out the potential for alternatives that actually respect the rights of individuals.

 

shopping cart

As the article Breaking up with Big Tech: A human rights-based argument for tackling Big Tech's market power explains, "Google has been accused of holding five different monopolies: search, advertising technology, browsers, smartphone operating system, and app distribution," while Meta has been accused of illegally maintaining a monopoly in personal social networking, with both companies' market power in each of these areas achieved in large part through strategic acquisitions. The article says that Google's purchases of DoubleClick, Invite Media, and AdMeld allowed it to maintain and reinforce its market share in the online advertising market, by acquiring companies that control different parts of the advertising process (vertical acquisitions). It says that, in addition to giving Google control over the full "ad stack," Google has been accused of buying up potential competitors to maintain its control of the advertising technology space.

 

Like in the image above, big tech companies easily place competing companies in their shopping carts.

 

Google also preinstalled Google Chrome and other Google apps onto Android smartphones (which have had at least a 65% global market share of smartphone operating systems in the last decade, with even greater dominance in South America and Africa), strategically using its dominance in one area (smartphone operating systems) to support its dominance in others (browsers and search), entrenching pervasive data collection on smartphone devices -- and near-constant surveillance of smartphone users.

 

The article also described how Meta's acquisitions of Instagram and WhatsApp led the FTC to accuse Meta of illegally buying or burying competitors through horizontal acquisitions, after Meta failed to thrive during the transition to mobile; and stated that "Facebook's actions have suppressed innovation and product quality improvement… degrading the social network experience (and) subjecting users to lower levels of privacy and data protections and more intrusive ads."

 

Big tech monopolies stifle better alternatives that respect people's rights, stopping them from developing and from providing real options and choices for customers, such that, like my man says, we find ourselves with only an illusion of choice.

 

 

11) Big tech companies use unfair terms and conditions to trick people into trading away their rights to privacy, as well as their freedom to opt out of ads.

 

As the article Breaking up with Big Tech: A human rights-based argument for tackling Big Tech's market power says, "The market power of big tech companies has made it increasingly difficult to access the internet without interacting with their infrastructure or services -- whether for search, video, e-commerce[,] or social media. To use these services and infrastructure, users must accept the terms of service and privacy policies, many of which directly and negatively impact upon our rights.

 

"Google's UK privacy policy outlines a broad range of data that Google collects from users, including information that users actively provide (such as names and phone numbers), content that users create or receive (such as emails and documents), location data, and detailed information about the user's activity online."

 

For Android users, the device also "periodically contacts Google servers to provide information about your device and connection to [their] services... (including information such as device type and carrier name, crash reports, which apps you've installed, and, depending on your device settings, other information about how you're using your Android device)."

 

Even users who are not logged into a Google account have data collected via unique identifiers tied to their browser, app, or device. Similarly, users of Meta's services (excluding Instagram, which has separate terms) must agree to Meta's Terms of Service, which state: "You acknowledge that by using our Products, we will show you ads that we think may be relevant to you and your interests. We use your personal data to help determine which personalised ads to show you." 

 

Meta's privacy policy outlines that the company collects an expansive array of user-generated content, messages, metadata, purchase history, and interactions with advertisements. Meta also gathers data about a specific user via other users -- for example, when someone uploads their address book or tags a person in a photo. Meta also tracks user activity on smartphones, such as which app is in the foreground; and collects data shared through device settings, including GPS location, camera access, and photos. The company also receives information from third parties about websites visited, apps used, and games played -- allowing it to track users beyond its own platforms.

 

Google and Meta's terms of service and privacy policies are far-reaching, with Google able to read our private emails, and Meta and Google both able to track us across the internet, such that they often know where users live and work, who they live with, what they do for a living, and even intimate details of their lives.

 

The article says that this degree of data collection and use for advertising isn't, and can never be, compatible with our right to privacy, but that users are left with a restricted choice: accept terms that negatively impact our rights, and gain access to Google and Meta's products and services, or don't accept them, and be cut out from large swathes of the internet that comprise crucial aspects of our personal and professional lives.

 

The article says that, although it could be argued that it is technically possible -- although incredibly difficult -- for people to avoid using Google Search, YouTube, Facebook, Instagram, and WhatsApp, the ubiquity of Google and Meta's advertising tracking technology across the web means that, even if you avoid their direct products and services, it is virtually impossible to avoid them collecting at least some of your personal data, and that, even in countries with legal protections which restrict the collection of sensitive data, enforcing those rights in practice remains a major challenge.

 

The article says that rights on paper do not always translate into meaningful accountability and protections, as the enforcement of rights is often slow, fragmented, and under-resourced, and fines are frequently absorbed as a cost of doing business. The article gives the examples of how Meta has been served over €2.5 billion worth of fines under GDPR enforcement in the EU since 2019, Amazon €780 million, and Google €215 million, but these costs are dwarfed by their annual revenues (Meta $164.50 billion, Amazon $637.96 billion, and Google $350.02 billion) -- such that these penalties, while headline-grabbing, have done little to curb systemic rights violations by dominant platforms.
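To see just how small these fines are relative to the revenues quoted above, here is a rough, back-of-the-envelope calculation. It is only a sketch: it treats euros and dollars as roughly comparable for order-of-magnitude purposes, and uses only the figures cited in the article, not independently verified numbers.

```python
# Back-of-the-envelope comparison of the fines and annual revenues cited above.
# Simplification: EUR and USD are treated as roughly 1:1 for a rough ratio.
fines_eur = {"Meta": 2.5e9, "Amazon": 780e6, "Google": 215e6}
revenue_usd = {"Meta": 164.50e9, "Amazon": 637.96e9, "Google": 350.02e9}

for company, fine in fines_eur.items():
    pct = fine / revenue_usd[company] * 100
    print(f"{company}: cumulative fines are roughly {pct:.2f}% of one year's revenue")
```

Even for Meta, the largest of the three totals, the cumulative fines come to well under two percent of a single year's revenue -- consistent with the article's point that such penalties are simply absorbed as a cost of doing business.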

 

The article even gives the example of how, in 2023, Meta introduced a "pay or consent" model in the EU, offering users a choice between paying a subscription for an ad-free experience, or consenting to data tracking for targeted advertising, which was found to violate EU law.

 

The article identifies that "[t]he imposition of unfair terms and conditions by powerful tech companies is a structural human rights issue. These terms have been shown to be exploitative and non-negotiable, and they disproportionately affect users in contexts with weaker regulatory protections. Even in jurisdictions with strong legal frameworks, enforcement is often too slow or too weak to meaningfully challenge the power of Big Tech. The result is a global digital environment where users are routinely denied meaningful consent and control over their rights."

 

 

12) Big tech companies use forced opt-ins to trick users into enabling the companies to access, analyze, and invade their private communications, without informing users of what they are doing, or giving users the choice to opt out.

 

yes

In a recent example, the article Google Sued For Allegedly Using Gemini AI Tool To Track Users' Private Communications reported that Google LLC is being accused in a civil lawsuit of using its artificial intelligence program Gemini to collect data on users' private communications in Gmail, and in Google's instant messaging and video conference programs. According to a class action lawsuit filed in the U.S. District Court for the Northern District of California, the Gemini AI assistant, until around October 10, required the user to deliberately opt into the feature, which was then "secretly" turned on by Google for all of its users' Gmail, Chat, and Meet accounts by default, enabling AI to track its users' private data in those platforms "without the users' knowledge or consent."

 

It forces us to check boxes like the one above, without telling us what we are agreeing to.

 

The lawsuit alleges that Google is violating the California Invasion of Privacy Act: a 1967 law that prohibits surreptitious wiretapping and recording of confidential communications without the consent of all parties involved.

 

The complaint points out that, while Google provides a way for users to turn off the feature, it requires users to look for it in the privacy settings to deactivate it, despite never having agreed to it in the first place.

 

Once the AI feature, categorized in "Google Workspace smart features" in Google settings, is turned on, it means that the user consents to the program using "Workspace content and activity" across Workspace or in other Google products.

 

According to the lawsuit, when the feature is turned on, Gemini can "scan, read, and analyze every email (and email attachment), message, and conversation on those services."

 

One user found Gemini to be "downright creepy," as it analyzed 16 years' worth of his emails after he signed up for a more advanced pro feature, and it was able to tell him one of his character flaws, and even knew who his first crush was in elementary school. He wrote that the invasion of privacy wasn't just disconcerting, but unexpected, since "Google didn't explain what this integration would do before I signed up for its AI Pro plan, nor did it give me a way to opt out at the start."

 

Google has stated that "We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission," but Thomas Thele, the plaintiff in the lawsuit, stated in the complaint that he suspected that his private information, such as medical records, employment records, religious and political affiliations and activities, and more has already been exposed to Gemini, and the court file states that, "The data from these communications enables Google to cross-reference and conduct unlimited analysis toward unmerited, improper, and monetizable insights into users' private lives, including their social, professional, and other relationships." 

 

Google had, just the month before, agreed to pay $1.37 billion to settle multiple lawsuits with the state of Texas that alleged that the company had violated residents' privacy rights through location tracking, biometric identifiers, and other means, showing that lawsuits and penalties don't do anything to stop big tech companies from continuing their invasive activities.

 

 

13) Big Tech is positioning itself to be indispensable to -- and to shape and profit from -- even more aspects of our lives, by becoming integral to the implementation and systemization of digital IDs.

 

ID

Apple's involvement in allowing people to use images of their driver's licenses as IDs is prompting many to speak out about the ulterior motives and dangers of the digital ID, and Apple's involvement in its implementation.

 

Apple says that the IDs are for people's convenience, but privacy experts know what's not being said.

 

The article Apple iPhones Can Soon Hold Your ID. Privacy Experts Are On Edge laid out some of the related concerns:

 

Evan Greer, director of the group Fight for the Future, a progressive organization critical of Big Tech, said, "This just strikes me as the latest example of where they're trying to weave themselves into more and more aspects of our lives... And when Apple becomes kind of indispensable, it truly is too big to fail."

 

Elizabeth Renieris, a Stanford University fellow studying digital identification systems, said that their time-saving conveniences and ease of use come at the cost of turning every instance in which we show our ID into a business opportunity, saying, "The sleeker these credentials are, the more they're embedded into things we're always attached to like a mobile device, which we take everywhere, the more there's an incentive to introduce identity requirements in contexts where it never existed before... We're running a risk where we'll be in a situation where we always have to identify ourselves, and that creates some perverse incentives."

 

She said that a for-profit company like Apple will treat IDs as a way to make money, perhaps one day tacking on transaction fees, in the same way that Apple does with purchases made through Apple Wallet.

 

Michael Veale, a University College London professor specializing in technology policy, said that the feature will make iPhone users even more reliant on Apple to carry out daily life, saying, "We're really opening Pandora's Box in allowing people to prove things about themselves from the intimate innards of their phone... But this is what Apple wants: to shape how people communicate, collaborate, discuss, buy and sell, and now people's very identities. Apple wants that all within their purview."      

 

As the article Privacy advocates are terrified by the dark potential of Apple Digital ID reported, Jason Bassler, co-founder of The Free Thought Project and founder of Police The Police, wrote on X, "Apple just rolled out Digital ID. The surrender of privacy is about to hit warp speed. This is step one of your digital leash, gift-wrapped as convenience... Once it's 'normalized,' it's irreversible. Then it's 'optional.' Until it's not."

 

passport

The article explains how national ID cards have a dark history rooted in surveillance, discrimination, and control, and were originally introduced in many countries during wartime or under colonial or authoritarian regimes, often serving less to empower citizens than to monitor them, giving examples such as how, in Nazi Germany, identity papers helped enforce racial laws and track Jews and other targeted groups, and how, in apartheid-era South Africa, passbooks were used to restrict the movement of Black citizens.

 

It says that a phone-based national ID system has the potential to be far more invasive than those primitive IDs ever could have been, prompting prominent privacy groups such as the American Civil Liberties Union (ACLU), the Electronic Frontier Foundation (EFF), and others to sign a statement that insists "that identity systems must be built without the technological ability for authorities to track when or where identity is used."

 

We have a lot more to say about digital IDs, but will limit what we include in this article to the role of Big Tech in digital ID implementation. Big Tech's involvement makes digital IDs even worse, increasing the speed of their implementation, and their potential for data collection.

 

 

14) Big tech companies can exploit their control over your access to your accounts with them and to their products, by making you submit personal documentation or sign up for paid accounts to regain access to accounts that you have been locked out of.

 

Not long after my experience with being censored by Facebook, I ran into even more very fishy problems with Facebook -- problems that others have speculated about, and have identified as scams by Facebook to try to force users to give up very personal identification information (including photos of their government IDs) in exchange for being able to access their Facebook accounts. When I tried to log back in after being logged out on my device, I was prompted to provide a code from my 2FA (2-Factor Authentication) device, when I had never linked a 2FA device to my Facebook account.

 

Upon checking online for solutions, I found that countless other users have experienced the same problem (see the Reddit thread Facebook 2 factor authentication problem -- how to regain access to account? as one of many threads on the same issue), and tried numerous workarounds, in their desperation to get their accounts back. As one Reddit thread titled Why is Facebook asking for really personal info as "confirmation"? showed, many were led to conclude that Facebook was doing that on purpose, in order to force users to submit their ID cards as a means to verify their identities.

 

As the person who started the thread said, he and his mom couldn't recover his mom's account, because every time they tried to log in, it would say that they had never logged in using that browser before, and that they needed a verification code, which it would never send them. And their only "solution" was to verify the mom's identity by having her submit one of the requested identification documents.

 

As the thread starter said, "But seriously?? I don't know if anyone else feels the same, but it is so incredibly sketchy to have a website ask for your driver[']s license or your tax information or your voter card. I don't even like submitting that kind of information when it's asked by government type sites, I definitely would never do it for something like Facebook."

 

The post continues: "I was even able to create a fake account for my mom (she's the one whose account got hacked) so she could have another FB account to rejoin the groups she liked, and she didn't need to give any super personal ID for verification. If their intention is to stop random fake accounts being created, this 'solution' is not it. I'm really suspicious that their whole intent is just to get more private information from people, under the guise of 'providing more security.'"

 

Others agreed, with one commenter replying with the following: "I agree, I don't really get it!! There are so many scammers out there and people looking for victims to prey on, I would never feel comfortable freely sharing any personal information about myself. I don't even like having to verify my account by providing a phone number, but I can at least see how that helps as a verifiable option to send a code to if your email is compromised. But to provide your driver[']s license?? Your visa??? To a website that's already suspected of selling your private information?? No thank you."

 

Another commenter said, "from what I heard, it's to collect personal information. that's why it doesn't matter [if] you send it or not, they will still ban your account. it works for some people, but then fb will ban their account again someday."

 

And another commenter said, "Seriously! And it isn't as if they've made it hard to create a new account again, you don't even need an ID verification to create a new account. I'm honestly suspicious that this is a scam specifically from Facebook that is intended to scare people into freely sharing their personal information, especially targeted at users who have a long history in the website that they don't want to be erased and would be tempted to give that personal info in order to save their account.

 

You are right though, if FB wants to clear the platform, they should send in ID check for every single new user.


There are many people acting horribly who have obvious fake names too. They are not hiding because they worry about hackers, they just know that they act horribly, and they want to do that behind a fake name. Facebook is totally fine with that. :("

 

Other people in other threads indicated that they had to sign up for paid Meta accounts just to be able to talk to customer service, leading many -- like those in the Reddit thread How on earth do you contact customer support? This is seriously ridiculous -- to believe that it is likely a money-making ploy by Facebook to force people to sign up for paid accounts, as the only way to even reach a person to try to resolve their issue.

 

The situation becomes even more complicated (and exploitative) with personal accounts tied to business accounts, which can only be accessed, edited, and updated by authorized personal accounts.

 

Many people have been locked out of their business accounts, which is what has happened to me: I can no longer access our Gem or Junk Facebook account, or make posts about our new articles on it, and have had to ask the Adventurer to do so for me, while I try to sort my lost access to my Facebook account out.

 

At this point, I have little faith that I will be getting access to my Facebook account back, and I really feel for everyone who is in the same boat. I think it's absolutely disgusting that so many people are being threatened with the loss of so many memories and personal connections tied to their accounts, and that so many have ended up giving in, with so high a bargaining chip at stake.

 

It's an example of how little say we have over big tech products and services, and why it's better to not rely on them, and find better alternatives.    

 

 

15) Big tech companies experiment on us, and manipulate things like our emotions, without informing us.

 

As the article Facebook reveals news feed experiment to control emotions explained, Facebook revealed a news feed experiment that it conducted in 2012 to control emotions -- a secret study involving 689,000 users, in which their friends' postings were moved to influence moods. Facebook manipulated information posted on 689,000 users' home pages, and found that it could make people feel more positive or negative through a process of "emotional contagion": filtering users' news feeds -- the flow of comments, videos, pictures, and web links posted by other people in their social network. One test reduced users' exposure to their friends' "positive emotional content," resulting in fewer positive posts of their own. Another test reduced exposure to "negative emotional content," and the opposite happened. The study concluded that, "Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks."
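To make the mechanism concrete, here is a minimal, purely hypothetical sketch of what "reducing exposure to positive emotional content" could look like in practice; it is not Facebook's actual code. It assumes posts already carry a sentiment score, and simply withholds a fraction of the positive ones -- the withhold_fraction parameter and the sample posts are invented for illustration. In the real study, the emotional tone of what affected users then posted themselves was compared against a control group.

```python
# Hypothetical illustration only -- not Facebook's implementation.
# Assumes each post already has a sentiment score (> 0 means "positive").
import random

def filter_feed(posts, withhold_fraction=0.3):
    """Return the feed with a share of the positive posts silently withheld."""
    shown = []
    for text, sentiment in posts:
        if sentiment > 0 and random.random() < withhold_fraction:
            continue  # silently drop some positive posts
        shown.append((text, sentiment))
    return shown

feed = [
    ("Best vacation ever!", 0.9),
    ("Stuck in traffic again.", -0.4),
    ("My team won the game!", 0.7),
]
print(filter_feed(feed))
```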

 

In response, a senior British MP called for a parliamentary investigation into how Facebook and other social networks manipulated emotional and psychological responses of users by editing information supplied to them, and Jim Sheridan, a member of the Commons media select committee, said that the experiment was intrusive, saying, "This is extraordinarily powerful stuff and if there is not already legislation on this, then there should be to protect people... They are manipulating material from people's personal lives and I am worried about the ability of Facebook and others to manipulate people's thoughts in politics or other areas. If people are being thought-controlled in this kind of way, there needs to be protection and they at least need to know about it."

 

Commentators voiced fears that the process could be used for political purposes in the run-up to elections, or to encourage people to stay on the site, by feeding them happy thoughts, and so boosting advertising revenues.

 

It was claimed that Facebook may have breached ethical and legal guidelines by not informing its users that they were being manipulated in the experiment.

 

The study said that altering the news feeds was "consistent with Facebook's data use policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research."

 

But Susan Fiske, the Princeton academic who edited the study, said, "People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty," and James Grimmelmann, professor of law at Maryland University, said that Facebook had failed to gain "informed consent" as defined by the U.S. federal policy for the protection of human subjects, which demands an explanation of the purposes of the research and the expected duration of the subject's participation, a description of any reasonably foreseeable risks, and a statement that participation is voluntary.

 

The article says that it is not new for internet firms to use algorithms to select content to show to users, and quotes Jacob Silverman, author of Terms of Service: Social Media, Surveillance, and the Price of Constant Connection, as telling Wire magazine that the internet was already "a vast collection of market research studies; we're the subjects," and that "What's disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission... Facebook cares most about two things: engagement and advertising. If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there's little reason to think that they won't do just that. As long as the platform remains such an important gatekeeper -- and their algorithms utterly opaque -- we should be wary about the amount of power and trust we delegate to it."

 

The whitepaper The rise of experimentation as the industry standard confirms Silverman's observation that we are the subjects of market research studies, saying, "The total addressable market (TAM) for experimentation has exploded because the mindset has shifted from 'we test UI tweaks' to 'we can experiment with nearly any business decision,'" with Che Sharma, an ex-experimentation leader at Webflow, pointing out that early tools covered only a "tiny sliver of the decision TAM," whereas, today, companies want to test everything -- from UI changes, to pricing, to offline decisions.

 

The article says that, because nearly any measurable decision can be tested, the real TAM is massive -- billions of dollars; and major enterprise software vendors (including Adobe, Oracle, Google, and Salesforce) have integrated experimentation into their clouds. It says that Optimizely was acquired in 2020 by Episerver (which subsequently rebranded itself as Optimizely) to marry content management with experimentation, and that, between 2010 and today, what began as a niche software as a service (SaaS) sector has grown into a core part of the software stack, with standalone vendors and major platforms alike vying to power the world's experiments.

 

innovation

The article gives many examples of experimentation, including one of Google's most famous experiments, involving 41 shades of blue in the mid-to-late 2000s, where, instead of letting executives choose a hyperlink color by gut feel, Google ran a multivariate experiment on dozens of hues, finding a more purple-tinged blue that consistently earned more clicks, which eventually translated to an additional $200 million in annual revenue -- solidifying Google's data-driven ethos, and proving that even seemingly trivial UI changes can have an outsized impact.
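For readers curious what such an experiment looks like mechanically, here is a minimal sketch of how the click-through rates of two link shades might be compared with a simple two-proportion z-test. The counts, the shade labels, and the function are invented purely for illustration, and this is not Google's actual methodology (which tested dozens of hues in a multivariate design).

```python
# Minimal sketch of comparing two link colors' click-through rates -- illustration only.
from math import sqrt

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """z-score for the difference in click-through rates; |z| > 1.96 ~ p < 0.05."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Hypothetical counts: shade A vs. a slightly more purple-tinged shade B.
z = two_proportion_z(clicks_a=10_000, views_a=1_000_000,
                     clicks_b=10_400, views_b=1_000_000)
print(f"z = {z:.2f}")  # ~2.8 here, i.e. the tiny difference is statistically detectable
```

At the traffic volumes a search engine sees, even a fraction-of-a-percent lift in click-through rate becomes statistically detectable, and, multiplied across billions of impressions, commercially significant -- which is how a color change can plausibly be tied to the revenue figure quoted above.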

 

experimentation

The article says that successful companies often run the most experiments, and that staying competitive means adopting a culture that measures, tests, and learns on repeat. Looking ahead, it says, experimentation will likely expand even further -- into AI, offline domains, and every corner of business -- cementing its status as a dominant strategy in software; what began as a scrappy tactic for optimizing landing pages has become the default operating model for modern product teams.

 

Whether or not companies report their studies and findings, the fact that we are being experimented on without our permission or consent is a given.

 

 

16) Big tech companies hide the fact that they are what drives AI philosophy, design thinking, and construction -- designing AI to help further their agendas -- and lie to be able to do so.

 

robot

The article AI is designed to lie gives examples of lies told by AI, and explains that AI is designed to lie because it's designed for engagement and overconsumption: ChatGPT is free, which means that you are the product, and your data is what it's selling -- and, to get your data, it must get you to like it, and to need it, and so it lies.

 

The article says that AI designers keep asking: How do we increase engagement? How do we get people to consume more? It says that AI is designed to promise you a constant high -- that you will never be bored, that your most superficial want will be instantly satiated by your most artificial friend, and in return, "AI wants... to be able to sell you to its advertisers, to bleed your credit card a little, because to get lifetime value out of you, it needs you to be a functioning addict."

 

The article goes on to say that much AI philosophy, design thinking, and DNA construction is driven by Google, Facebook, and Microsoft's Bing, as the companies are not search engines or social media companies, but are advertising and marketing companies, with the overwhelming majority of their revenues coming from capturing our attention, and then helping others convince us to buy their products or political ideas.

 

The article describes how, in the 1950s, AI was developed to mimic the brain during a period when very little was known about how the brain worked, and, as such, the way in which AI makes a decision is designed to be unknowable, which means that, if you think that you have been unfairly refused a state benefit by AI, you will have no practical means of appeal, but will just have to take AI's word for it -- and that, if AI can, it will cheat and mistreat poor people and minorities because it's meant to save and make money for rich and powerful people.

 

"We are likely to understand the decisions and impacts of AI even less over time," David Beer wrote for the BBC in 2023. We are, in essence, treating AI like an all-knowing God that we need to have faith in, as we have treated much of modern technology as a God.

 

As an example, the article Bad software sent postal workers to jail, because no one wanted to admit it could be wrong describes how, for 20 years, UK Post Office employees dealt with software called Horizon, which had bugs that made it look like employees had stolen tens of thousands of British pounds, leading some local postmasters to be convicted of crimes, and even be sent to prison, because the Post Office insisted that the software could be trusted.

 

The UK's prime minister called the original convictions "an appalling injustice." Information from Horizon was used to prosecute 736 Post Office employees between 2000 and 2014, some of whom ended up going to jail, because of bugs in the system that caused it to report that accounts that were under the employees' control were short -- with some employees even trying to close the gap by remortgaging their homes, or using their own money.

 

The BBC reported that the Post Office argued that the errors couldn't have been the fault of the computer system -- despite knowing that this wasn't true, as there was evidence that the Post Office's legal department was aware that the software could produce inaccurate results, even before some of the convictions had been made. According to the BBC, one of the representatives for the Post Office workers said that the post office "readily accepted the loss of life, liberty[,] and sanity for many ordinary people" in its "pursuit of reputation and profit."

 

That AI answers are biased toward helping big tech agendas is especially problematic because of how many people have bought into the idea of AI being there to help them, unquestioningly going along with AI responses that are designed to steer them in directions driven by Big Tech.

 

 

17) Big-Tech-driven AI creates even greater potential for bigger privacy breaches.

 

The article ChatGPT maker OpenAI confirms major data breach, exposing user's names, email addresses, and more said that OpenAI was sending out emails confirming that a ton of user data had been exposed, owing to a breach in a third-party web analytics tool called Mixpanel.

 

OpenAI claimed that ChatGPT users were unaffected, with chat content, API usage, passwords, payment details, and government IDs remaining safe. However, users of OpenAI's API interfaces at platform.openai.com saw a variety of data exposed in this breach, including names provided to accounts on platform.openai.com; email addresses linked to the API accounts via platform.openai.com; "coarse approximate location" determined by IP address; OS and browser type; referring websites; and organization and user IDs saved into the API accounts.

 

With the amount of information that users share with AI, and how much of this information is recorded and stored thanks to Big Tech, the potential impact of breaches is even more significant, and companies failing to share how much private information is on the line is very disconcerting.

 

 

18) Big Tech is openly pushing to replace humans with robots, while presenting this as a benefit to us, when it actually harms us, and only benefits them in the long run. 

 

The article Big Tech Pushes to Replace Humans as Critics Warn of Dystopia concurs with what my man has known for years. It quotes Breaking Points host Krystal Ball as saying, "Their goal is to eliminate as much human labor as possible... We're in a race with China and we have to win. Consequences? We're not even going to think about what the fallout is going to be."

 

Journalist Karen Hao also critiques the concentration of power in the hands of a few dominant tech companies, warning that the centralization can lead to monopolistic behaviors, stifle innovation, and limit opportunities for smaller players and alternative approaches to AI development.

 

Sounding alarms about Big Tech's aggressive push while promoting her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, Hao was quoted on CNBC's Squawk Box as saying, "Right now, these Silicon Valley companies are trying to make everything machines... And when you're trying to make everything machines, not only are they not going to be high-quality in everything, people are going to have a fundamental misunderstanding about how they should be using these technologies[,] and inevitably it is going to harm them."

 

The article Bill Gates: Within 10 years, AI will replace many doctors and teachers -- humans won't be needed 'for most things' shares what Bill Gates had to say on the subject, including that, over the next decade, advances in artificial intelligence will mean that humans will no longer be needed "for most things" in the world, and that, while, at the moment, expertise remains "rare" (for example, human specialists that we still rely on in many fields, including "a great doctor" or "a great teacher") "with AI, over the next decade, that will become free, commonplace -- great medical advice, great tutoring."

 

robots replace human jobs

Gates is quoted as saying, "There will be some things we reserve for ourselves. But in terms of making things and moving things and growing food, over time those will be basically solved problems."

 

See the article These Companies Have Already Replaced Workers with AI in 2025 for examples of companies that have done massive layoffs, and replaced their workers with AI, which include Google, Microsoft, and Amazon.

 

As the article For Silicon Valley, AI isn't just about replacing some jobs. It's about replacing all of them says, "the full automation of the economy" is a vision that some of the biggest names in Silicon Valley are funding, with even Elon Musk saying that the rise of AI and robotics will mean that "probably none of us will have a job," and that "where AI threatens white-collar jobs, robots target physical [labor]" -- "AI does the thinking, robots do the doing," with Silicon Valley seizing the chance to own the entire means of production.

 

The article Big Tech Wants to Rot Your Brain says that "it's the beginnings of technology doing your thinking for you," where "content from friends was relegated to the algorithmic abyss, hidden in feeds that were no longer chronological, replaced by content from 'people the completely non-transparent algorithm thinks you'll be interested in,' based on the 1000s of data points they stole from us before we realized what was really going on" -- so that we stopped thinking about what we wanted to see, and started to see what we were given to see, with no more need for agency, exploration, or thought: now it's swipe, swipe, and swipe, or, as Adam Singer terms it, "intellectual poison" that has increased the time we spend on platforms and devices.

 

As Zuckerberg was quoted as saying, "I think the next logical jump is like, 'Okay, we're showing you content from your friends and creators that you're following and creators that you're not following that are generating interesting things. And you just add on to that, a layer of, "Okay, and we're also going to show you content that's generated by an AI system that might be something that you're interested in."'"

 

As the article says, "So we're entering a future where we won't seek out things we enjoy, but instead have AI generate what it thinks we want to see and push it on us without prompting, as a way to drive engagement and monetize it."

 

This is the future that Big Tech is driving us toward, and why we need to use our brains, and exercise our rights and abilities to choose better options, and a better future for ourselves than one controlled by and solely benefiting Big Tech, while it frames it as being for our benefit.

 

 

19) Big tech companies have metastasized into "data-opolies," which are far more dangerous than monopolies, with their invasion of privacy unlike anything the world has ever seen, but their potential to manipulate us being even scarier -- with them aiming to use the metaverse to infiltrate our very thoughts and behaviors.

 

In the article Giant Tech Firms Plan to Read Your Mind and Control Your Emotions. Can They Be Stopped?, author and law professor Maurice Stucke explains why the practices of Google, Amazon, Facebook, and Apple are so dangerous, and what's really required to rein them in.

 

Part of a progressive, anti-monopoly vanguard of experts looking at privacy, competition, and consumer protection in the digital economy, Stucke, in his new book, Breaking Away: How to Regain Control Over Our Data, Privacy, and Autonomy, explains how these tech giants have metastasized into "data-opolies," which are far more dangerous than the monopolies of yesterday -- with their invasion of privacy unlike anything the world has ever seen, but their potential to manipulate us even scarier.

 

Stucke explains why current proposals to break big tech companies up, regulate their activities, and encourage competition fall short of what's needed to deal with the threat they pose not only to our individual wallets and wellbeing, but to the whole economy -- and to democracy itself.

 

As Stucke says, "People used to argue that privacy and competition were unrelated. Now there's a concern that not only do these giant tech firms pose a grave risk to our democracy, but the current tools for dealing with them are also insufficient."

 

Stucke explains why the data-opolies are even more potentially harmful than traditional monopolies:

 

"First, they have weapons that earlier monopolies lacked. An earlier monopoly could not necessarily identify all the nascent competitive threats. But data-opolies have what we call a "nowcasting radar." This means that through the flow of data[,] they can see how consumers are using new products and how these new products are gaining in scale, and how they're expanding."

 

Stucke gives the example of Facebook having a privacy app that one of the executives called "the gift that kept on giving," where, through the data collected through the app, they recognized that WhatsApp was a threat to Facebook as a social network because it was starting to morph from simply a messaging service.

 

Another advantage he cites is that, even though the various data-opolies have slightly different business models and deal with different aspects of the digital economy, they all rely on the same anti-competitive toolkit, which he calls "ACK -- Acquire, Copy, or Kill." They have greater mechanisms to identify potential threats and acquire them, or, if rebuffed, copy them. While old monopolies could copy a rival's products, the data-opolies can do it in a way that deprives the rival of scale, which is key -- and they also have more weapons to kill nascent competitive threats.

 

The other major difference he cites is the scope of the anti-competitive effects. While past monopolies might have just brought less innovation and slightly higher prices, with the data-opolies the harm isn't only to our wallets. With Facebook, for example, it's not just that they extract more money from behavioral advertising; it's the effect that their algorithms have on social discourse, democracy, and our whole economy, with significant harms to our wellbeing.

 

Stucke explains how behavioral advertising differs from regular advertising. Behavioral advertising is often presented as just a way to offer us more relevant ads -- following the view that people have preconceived demands and wants, and that it simply gives them ads that are more relevant and responsive. The shift with behavioral advertising, however, is that you're no longer just predicting behavior, but manipulating it.

 

He says that data-opolies are moving from addressing preconceived demands, to driving and creating demands -- asking, "What will make you cry?" "What will make you sad?"

 

He describes a Microsoft innovation whereby a camera tracks which particular events cause you to have particular emotions, providing a customized view of stimuli for particular individuals: "It's like if I hit your leg here, I can get this reflex. There's a marketing saying, 'If you get 'em to cry, you get 'em to buy.' Or, if you're the type of person who responds to violent images, you'll get delivered to a marketplace targeted to your psyche to induce the behavior to shop, let's say, for a gun."

 

He says that political parties are using similar tools to drive voter behavior, and that "[w]e've already seen from the Facebook files that the algorithms created by the data-opolies are also causing political parties to make messaging more negative because that's what's rewarded."

 

robot glasses

Stucke says that the next frontier for this manipulation is in reading individuals' thoughts, mentioning an experiment conducted by the University of California, where, for the first time, they were able to decode an individual's thoughts. He says that, first, the technology will decipher the words that we are trying to say, and identify from our subtle brain patterns a lexicon of words and vocabulary, and, as the AI improves, it will next decode our thoughts. He says that Facebook was one of the contributors funding the research, because "they're preparing these headsets for the metaverse that not only will likely transmit all the violence and strife of social media, but can potentially decode the thoughts of an individual and determine how they would like to be perceived and present themselves in the metaverse. You're going to have a whole different realm of personalization... We're really in an arms race whereby the firms can't unilaterally afford to de-escalate because then they lose a competitive advantage. It's a race to better exploit individuals. As it has been said, data is collected about us, but it's not for us."

 

See the image above for the kind of headset that we might expect.

 

Stucke explains why more competition won't help curtail these practices. The assumption is that if we just rein in the data-opolies -- maybe break them up or regulate their behavior -- we'll be better off and our privacy will be enhanced. To a certain extent, there was greater protection over our privacy while these data-opolies were still in their nascent stages, "but now you have this whole value chain built on extracting data to manipulate behavior; so even if this became more competitive, there's no assurance then that we're going to benefit as a result. Instead of having Meta, we might have [Facebook] broken apart from Instagram and WhatsApp, and would still have firms dependent on behavioral advertising revenue competing against each other in order to find better ways to attract us, addict us, and then manipulate behavior."

 

The article gives TikTok as an example of how this has already played out: adding TikTok to the mix didn't improve our privacy; one more player just meant one more attack on our privacy and wellbeing.

 

Stucke mentions a book that he cowrote, called Competition Overdose, where he and another author explored situations where competition could be toxic, saying, "People tend to assume that if the behavior is pro-competitive it's good, and if it's anti-competitive, it's bad. But competition can be toxic in several ways, like when it's a race to the bottom. Sometimes firms can't unilaterally de-escalate, and by just adding more firms to the mix, you're just going to have a quicker race to the bottom."

 

Stucke is also skeptical that giving people broader ownership rights to their data would help control the big data companies, explaining that a properly functioning market requires certain conditions to be present, many of which are absent when it comes to personal data. For example, there's the imbalance of knowledge: we don't know the price we pay when we turn over our data, because we don't know all the ways our data will be used, or the attendant harm to us that may result from that use.

 

He gives the example of downloading an ostensibly free app that collects, among other things, your geolocation. Nothing exists to tell us that this geolocation data could potentially be used by stalkers or by the government, or to manipulate one's children -- so we go into these transactions blind.

 

He describes, "When you buy a box of screws, you can quickly assess its value. You just multiply the price of one screw. But you can't do that with data points. A lot of data points can be a whole lot more damaging to your privacy than just the sum of each data point... You need to see the big picture; but when it comes to personal data, the only one who has that larger view is the company that amasses that data, not only across their own websites[,] but in acquiring third-party data as well.

 

"So we don't even know the additional harm that each extra data point might be having on our privacy. We can't assess the value of our data, and we don't know the cost of giving up that data. We can't really then say, all right, here's the benefit I receive -- I get to use [Facebook][,] and I understand the costs to me."

 

He cites another problem: normally, a property right involves something that is excludable, definable, and easy to assign (like an ownership interest in a piece of land), but with data, that's not always the case. He explains an idea called "networked privacy," where the concern is that the choices others make about the data they sell or give up can, in turn, have a negative effect on your privacy. As examples, he gives someone posting a picture of your child on Facebook that you didn't want posted, or someone sending you a personal message with Gmail or another service with few privacy protections -- such that, even if you have a property right to your data, the choices of others can adversely affect your privacy.

 

He also brings up how owning your data doesn't change things, citing how, when Mark Zuckerberg testified before Congress after the Cambridge Analytica scandal, he was constantly asked who owned the data, and kept saying that the user owned it.

 

Stucke points out that Facebook can tell you that you own the data, but, to talk with your friends, you have to be on the same network as them, and Facebook can easily say to you, "Ok, you might own the data, but to use Facebook you're going to have to give us unparalleled access to it."

 

What choice do you have?

 

He says that the digital ecosystem has multiple network effects, whereby the big get bigger, and it becomes harder to switch, saying, "If I'm told I own my data, it's still going to be really hard for me to avoid the data-opolies." He says, "to do a search, I'm still going to use Google, because if I go to DuckDuckGo I won't get as good of a result. If I want to see a video, I'm going to go to YouTube. If I want to see photos of the school play, it's likely to be on [Facebook]. So when the inequality in bargaining power is so profound, owning the data doesn't mean much."

 

He explains how these data-opolies make billions in revenue from our data, and how, even if you gave consumers ownership of their data, these powerful firms will still have a strong incentive to continue getting that data. So he identifies another area of concern among policymakers today as "dark patterns," which is basically using behavioral economics for bad: "Companies manipulate behavior in the way they frame choices, setting up all kinds of procedural hurdles that prevent you from getting information on how your data is being used. They can make it very difficult to opt out of certain uses. They make it so that the desired behavior is frictionless and the undesired behavior has a lot of friction. They wear you down."

 

Big tech companies use so many tricks to get what they want and manipulate us -- even moving into manipulating our very thoughts -- that they can make people feel powerless and helpless against them. But we aren't powerless or helpless -- and keeping ourselves informed about Big Tech's new methods and manipulations, and making the effort to learn about and opt for better alternatives (like my man always does), are ways to combat them.

 

We may review some real competitors and alternatives to big tech companies in the future, so keep an eye out for if we do.

 

 

20) Big tech companies commit so many data breaches that they exhaust users into giving up on taking measures to protect their privacy online -- tricking them into believing that they don't have power, when they do. 

 

 

As a study "It wouldn't happen to me": Privacy concerns and perspectives following the Cambridge Analytica scandal summarized, "Recent research has demonstrated that people feel exhausted from hearing about (seemingly) endless data breaches in the news, and as a result feel that attempts to do anything to protect their data are pointless... This phenomenon, known as privacy fatigue, has also been found to occur when privacy controls are complex or too difficult to keep track of, and due to the overwhelming social/psychological strains of using social networking sites." As well, "the varied ways in which people use social media... means that people may frequently see activity they may disagree with or find aggravating, and these "increasing feelings of losing control, both in terms of keeping up to date with privacy settings and from 'unavoidable' exposure to stressful content may therefore cause people to disengage from taking measures to protect their privacy online. This, combined with the 'black box' nature of algorithmic advertising, and people's inability to see or understand how their data is used may further strengthen such feelings."

 

See the image of the wind-up human to the side, which captures the worn-down feelings and beliefs that big tech companies have created in most people today.

 

The article notes, however, that people's privacy-related concerns and their behavior frequently contradict each other -- a phenomenon known as the privacy paradox, where people will often claim to be concerned about their privacy, only to later disclose personal information for relatively little in return, such as disclosing their income or date of birth for a discount in an online shop, or disclosing their phone number or address to use financial services.

 

It says that numerous researchers have sought to understand the privacy paradox, and have offered a range of explanations as a result, including a lack of understanding of risk and of privacy-protective behaviors, a lack of first-hand experience of online privacy invasions, and social influences (e.g. sharing data because friends and family do). However, it says that, despite significant attempts to explain the privacy paradox over recent years, the evidence supporting these accounts remains contradictory and inconclusive.

 

Alternative explanations of the privacy paradox suggest that individuals make privacy decisions by evaluating the potential risks and benefits of disclosing information, with some researchers suggesting that individuals perform a privacy calculus, in which their behavior is determined by the outcome of the privacy trade-off: in other words, if the perceived benefits of sharing data exceed the costs, then an individual will likely disclose information (e.g. sharing personal data in order to reap the benefits of loyalty programs).

 

As well, people are generally reluctant to leave such platforms, due to social pressures and the technological affordances they provide (e.g. receiving event updates, and maintaining connections with weak ties).

 

Another consideration that the article describes is that users lack awareness and understanding of how computer algorithms work, and of what those algorithms can infer from their and/or others' information. This is particularly challenging, given that algorithms are generally opaque, to the extent that, in some cases, even the developers do not know how they work. It is further complicated by the fact that a user's privacy is interconnected with that of other people (networked privacy): on Facebook, for example, people can disclose others' information when publishing content or through interaction with others -- content which can then be re-posted or shared by others within their networks, who may also continue to propagate that information. When such disclosure is intentional (e.g. tagging someone in a photo, or commenting on a post), people can adjust their settings or take other measures to attempt to protect their privacy; but sometimes users are unaware that they are revealing information about other people -- for instance, an individual's private attributes, such as age or location, can be inferred from others' data, unbeknownst to the individual themselves.

 

The article also says that Communication Privacy Management (CPM) theory may provide insight into individuals' concerns and behaviors, suggesting that individuals use the perceived costs and benefits of information disclosure to establish privacy boundaries with those they communicate with. It describes how, up until recently, people may have thought that they had more control over their privacy boundaries, and likely did not consider how privacy can be collectively determined. According to CPM theory, when boundaries are unclear, conflict can result, as people feel their expectations of maintaining their privacy have not been met (i.e. their privacy has been violated). This concept, known as 'boundary turbulence,' often occurs unintentionally, especially in circumstances where privacy boundaries are not fully understood.

 

--

 

It's clear that, no matter how BIG Big Tech is, it still cracks down on posts that show public OPPOSITION to the leniency shown to big tech companies, and that introduce the idea of truly meaningful steps that people might take to break free, like stopping the use of big tech devices altogether.

 

That such a big big tech company chose to control even our tiny post shows that it CARED ABOUT our tiny post -- proving that every tiny post encouraging action to fight Big Tech and its seemingly unstoppable force matters. Despite Big Tech tricking many into believing that there is nothing they can do, and making them feel like giving up, we still have the power to go against Big Tech's plans, and f*ck big tech companies up. We can do so by choosing to use alternatives like our website -- which have been designed to help free us from the false narrative that we have to rely on big tech resources and products -- and by encouraging others to use alternatives, so that people can regain their autonomy and independence from the future that Big Tech and its backers have worked for so many years to impose on us -- and take our future and freedom back.

 

corporation and puppet person

Every small action adds up, and every one of our "no"s to Big Tech today collectively affects Big Tech as a whole -- as Big Tech proved with its attempt to nip our "no" to its censorship in the bud. While they want us to feel like they are all-controlling, the fact is, they are scared that we will snip the fragile puppet strings that they have worked so hard to attach -- and it is up to us to cut our fragile ties with them, and slice through their false power once and for all. We are NOT their puppets, like the one featured in the image to the side.

Joker Harley card

While it initially threw me off and pissed me off that Big Tech prevented my post -- and what it meant -- from being seen, I now laugh at the victory that this demonstrates: our little website scared the Big Bad Big Tech.

 

We matter more than they want us to believe, so own your strength -- and strength in numbers -- to show Big Tech that it can't push us around, or force us to give up and give in to the agenda it has convinced many it has already achieved, when even the smallest of actions from sites as small as ours puts it on high alert.


Since we know that it's hard to say no completely to big tech offerings (heck, we're still using a Gem or Junk Facebook page, in the hopes of reaching more people with our gem articles and findings), we urge you to at least use them mindfully, and to be more aware of what you share and do around such sneaky simulations of helpful devices, which really only help the companies and people behind them.


If you haven't read my special review on voice assistants and smart speakers yet, then definitely check it out, for a much more in-depth look at the issues with smart devices, big tech companies, and more.


If you want to help support our site, then please donate crypto to our Unstoppable Domains page, or sign up for Presearch using our referral link, by clicking on the referral banner below. Doing so will allow you to make money while you search for things online.

 

presearch

 

Or, you can share this article, the first part of our special review on sneaky "smart" technology, or any of our other articles that you find helpful or insightful, with your family, friends, coworkers, and followers.

 

You can also follow us on X and Facebook, for updates and article postings.


See you in my next post. :)