PSY 355 Psychology & Media in the Digital Age
This page was last modified on March 27, 2026.
Artificial Intelligence and the Future of Human Affairs: Part 3
AI: Effects on Employment and the Job Market
Global employment change by 2030 (World Economic Forum 2025 Report)
Across the globe over the next five years, 170 million jobs are projected to be created and 92 million to be displaced, a structural labor-market churn of 22% of the 1.2 billion formal jobs in the data set studied. This amounts to a net employment increase of 7%, or 78 million jobs.
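As a quick check on how these figures fit together, here is a small arithmetic sketch in Python (the 170 million, 92 million, and 1.2 billion values come from the report as cited above; treating the cited 7% as a rounded figure is my assumption):

```python
# Figures from the WEF 2025 report cited above, in millions of jobs.
created = 170
displaced = 92
total_jobs = 1200  # ~1.2 billion formal jobs in the data set

# "Churn" counts every job touched, whether created or displaced.
churn_rate = (created + displaced) / total_jobs
net_change = created - displaced

print(f"Churn: {churn_rate:.0%}")                  # -> Churn: 22%
print(f"Net change: +{net_change} million jobs")   # -> Net change: +78 million jobs
print(f"Net rate: {net_change / total_jobs:.1%}")  # -> Net rate: 6.5%
```

Note that 78 of 1,200 works out to about 6.5%; the report's "7%" appears to reflect rounding.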
What areas of employment are most likely to be affected by AI?
- Office and Administration: Data entry clerks, receptionists, proofreaders, and accounting assistants are highly vulnerable as AI automates document processing and scheduling.
- Customer Service & Sales: Chatbots and virtual assistants are replacing human staff for routine inquiries, telemarketing, and customer service.
- Media and Content Creation: Technical writers, copywriters, copy editors, and social media analysts are being displaced by generative AI that can produce content rapidly.
- Tech and Finance: Entry-level coders, software engineers, and financial analysts are seeing decreased demand as AI writes code and analyzes data more efficiently.
- Transportation and Manufacturing: Autonomous vehicles and advanced robotics are gradually replacing drivers and warehouse and manufacturing workers.
What areas of employment are least likely to be affected by AI?
- Healthcare Professionals: Doctors, nurses, and therapists, whose work depends on empathy and complex judgment.
- Skilled Trades: Electricians, plumbers, and technicians.
- Creative Strategy and Leadership: Creative directors, managers, and strategic planners.
Check your own future career choice at TripleTen.
Do these predictions have any relevance to, or impact on, what you see yourself doing 5 to 10 years from now?
AI: National Security
The Defense Department versus Anthropic (the AI company that makes Claude)
Yesterday, March 26, 2026, a federal judge granted Anthropic a preliminary injunction against the Trump administration's and Defense Department's move to blacklist the AI firm, following a dispute over AI usage in autonomous weapons and surveillance. The Pentagon labeled Anthropic a “supply chain risk” after the company refused to remove safety guardrails on its Claude model for military use.
Key Aspects of the Conflict (YouTube Video: CNN February 2026)
- The Dispute: The Pentagon demanded unrestricted use of Anthropic’s AI for "all lawful purposes," including potential autonomous weapons.
- Anthropic's Position: Anthropic refused to allow its technology to be used for mass surveillance or fully autonomous lethal weapons, citing safety and ethical concerns.
- "Supply Chain Risk" Label: The Department of Defense—often called the "Department of War" by the administration—designated Anthropic a national security risk, a move usually reserved for foreign adversaries, which prohibited government contractors from using its technology.
- Legal Outcome (Preliminary): Judge Lin ruled in favor of Anthropic, stating that the government likely misused its power to punish the company for its safety stance.
The conflict highlighted a major clash between AI developers who want ethical safeguards and a government that demands unconstrained military use of the technology.
Should AI developers have the right to demand their products be used within ethical guidelines established by those developers?
AI: Intellectual Property
The major dispute in AI and intellectual property (IP) centers on whether using copyrighted works to train generative AI models constitutes "fair use" or copyright infringement. Rights holders argue that AI companies steal content to create competing products, while developers claim training is transformative learning.
Key Aspects of the Dispute:
- Training Data (Input): Lawsuits like The New York Times v. OpenAI and Getty Images v. Stability AI allege that AI companies illegally scraped massive amounts of creative work to train models without consent or compensation.
- AI-Generated Outputs: Concerns exist that AI outputs can produce content "substantially similar" to copyrighted material, violating the rights of original creators.
- Ownership of AI Works: Courts have consistently ruled that only human-created works are eligible for copyright protection, rejecting attempts to grant ownership to AI, as seen in Thaler v. Perlmutter.
- Fair Use Defense: AI developers argue that using data for training is a transformative, permissible use, akin to human learning.
This conflict has created significant litigation regarding fair use, with some courts finding training transformative but expressing concerns about the market impact on human creators.
Should AI developers have the right to use the intellectual property of others to train their models without obtaining the owners' consent or compensating them for the use of their property?
AI and (Deep)Fake Videos
The dangers of artificial intelligence (AI) and fake videos, commonly known as deepfakes, stem from their ability to create highly realistic but fabricated audio and video, making it increasingly difficult to distinguish truth from falsehood. These technologies facilitate widespread misinformation, scams, fraud, reputational damage, and the creation of non-consensual content.
The key dangers of AI and fake videos include:
- Political Manipulation and Disinformation: Deepfakes can be used to engineer fake scandals, create false speeches by political leaders, or manipulate voters, potentially altering election outcomes and eroding trust in democratic processes.
- Financial Fraud and Business Risks: Scammers use AI to impersonate company executives in video or voice calls to convince employees to transfer funds, with incidents already leading to massive financial losses. A single fake image or video can also be used to influence stock prices.
- Reputational Destruction and Harassment: Deepfakes can be used to create non-consensual pornographic videos or fake videos of individuals committing crimes, destroying reputations and causing severe personal distress.
- Consider this scenario: Someone has taken videos of you in different situations and has recorded your voice. They then use AI to create a fake video of you doing or saying something that would harm your reputation, for example, a video of you telling racist, homophobic, or antisemitic jokes. What can you do in such a situation?
- The "Liar’s Dividend": As fake content becomes more common, it becomes easier for people to dismiss real, authentic videos of misconduct as "AI-generated" or fake, allowing wrongdoers to evade accountability.
- Social Engineering and Phishing: AI can create tailored, hyper-realistic content for scams, making it harder for individuals to identify phishing attempts or fraudulent requests.
- Erosion of Truth and Social Distrust: The proliferation of fake content creates a general sense of paranoia in which people may stop believing legitimate information, causing societal instability.
- Have you experienced any examples of deepfake images or videos?
- Do you find yourself questioning whether or not something you have seen on a screen is real?
- Have you ever wondered whether you have dismissed something real as unreal?
March 25, 2026: Ruling against Meta and YouTube
Two days ago, in a landmark case, a California jury found Meta and YouTube liable for $6 million in damages for designing platforms with addictive features that caused mental health problems in a young user. The suit centered on allegations that the platforms were deliberately engineered for maximum engagement, causing anxiety and depression.
Key takeaways from the ruling against Meta and YouTube (CBS News)
Primary Points in the Suit
- Negligent Design: The platforms (Instagram, Facebook, and YouTube) were designed to be deliberately addictive for youth, employing features like infinite scroll and algorithmic recommendations.
- Mental Health Harm: The lawsuit argued that this design directly caused significant mental health injuries, including anxiety and depression, in the plaintiff.
- Failure to Warn: The platforms failed to properly warn users about the risks of addiction and mental harm.
- Targeting Children: Internal documents showed the companies, particularly Meta, targeted young users (tweens) despite knowing the potential for harm.
- Unconscionable Tactics: The suit alleged Meta engaged in unconscionable trade practices by exploiting the inexperience and vulnerabilities of children.
- Prioritizing Retention over Safety: Evidence indicated the companies prioritized user engagement and long-term retention over the safety of minor users.
The companies were found negligent and liable, with the ruling emphasizing that they knew their design was dangerous but failed to change it.
- What reaction or reactions do you have to this ruling?
- What did you think of the one speaker in the news story who said, "Stop blaming the parents"?
- Do you think it is likely that access to social media and other online platforms will eventually be legally restricted to those 16 or 18 years of age or older? NOTE: Apple in the UK now requires its phone users to prove that they are at least 18 years old.