Bibliography

The annotated bibliography below summarizes the readings from this DRG. Wow, we read a lot!


Hands Are Hard: Unlearning How We Talk About Machine Learning in the Arts

Tradition Innovations in Arts, Design, and Media Higher Education, by Oscar K. Keyes and Adam Hyland, vol. 1, iss. 1, Article 4, 2023.

Discusses how generative AI creates “bad hands” in art, exploring their implications for media literacy and the importance of human artistic skills. As these imperfections may soon vanish with technological advancements, the authors suggest using this moment to rethink our approach to machine learning in art, by intentionally creating “bad hands” to examine AI’s influence on art and question the definition of “human” within AI systems.

~

Do Cases Generate Bad AI Law?

Columbia Science and Technology Law Review, by Alicia Solow-Niederman, forthcoming, 31 Dec. 2023.

Found this paper on AI governance and how the judicial system is currently approaching AI. As the author puts it, there's an AI governance problem, but it's not (just) the one you think: the problem is that our judicial system is already regulating the deployment of AI systems.

~

The ChatGPT Lawyer Explains Himself

The New York Times, by Benjamin Weiser and Nate Schweber, 8 Jun. 2023.

In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chatbot could lead him astray. Last year, in a relatively mundane court case, a lawyer used ChatGPT to find reference cases and write a legal brief. The cases cited in the brief turned out to be non-existent, and this article looks at the aftermath of that mistake. I think this is especially relevant in light of this week’s class reading because the lawyer mentions that he did not understand how ChatGPT worked and did not think it could fabricate cases or “lie” to him. It is also implied that he was lulled into blindly trusting ChatGPT after a few successful citations and its human conversational style.

~

A car dealership added an AI chatbot to its site. Then all hell broke loose.

Notopoulos, K. (2023, December 19). A car dealership added an AI chatbot to its site. Then all hell broke loose. Business Insider.

An AI chatbot assisting Chevy dealership customers was messed with when a customer discovered that its ChatGPT integration would let the bot answer questions unrelated to Chevy and cars, doing far more than just talk about vehicles. After screenshots went viral on Twitter, many tried to persuade the bot to “act against the interests of the dealership.” However, in hours of logged chats, the bot “resisted” and never shared confidential data.

~

Generative AI models face many copyright infringement lawsuits. This article offers an interesting insight into what copyright law protects (original expression) and what it doesn’t (the ideas, facts, or methods embodied in a work), and demonstrates how copyright law has been used in previous court cases against tech companies. One example is Google LLC v. Oracle America, Inc., where the court sided with Google and concluded that its use of Java APIs was consistent with the “constitutional objective of copyright to promote creative progress”.

~

Can AI Be as Creative as Humans?

arXiv, by Wang, et al., 2024.

Pretty interesting paper introducing a new way to measure AI, specifically the creativity of AI relative to humans. The authors introduce a concept called “Relative Creativity” to compare the creativity of AI with hypothetical but realistic human groups. Instead of enumerating criteria that something must meet to count as creative, they evaluate simply by asking whether the AI generates something subjectively comparable to a human’s output. However, they don’t say much about how that comparison is evaluated. The paper does introduce a theorem on statistical creativity, to measure and compare AI-generated content against human-created content, though the evaluator remains a variable. I think this is an interesting approach and could help evaluate how AI compares to people in different professions. However, the evaluation needs to be more specific, which seems to be something the authors are working on as next steps.

~

Costigan, J. (2023, October 11). In the age of AI, do we have the right to die in peace? Forbes.

Artificial intelligence has expanded its capabilities in recent years, allowing humans to create avatars of their loved ones for after they pass away. Companies such as HereAfter let living users answer prompted questions through audio and video recordings so that when they pass, friends and family can have an AI version of them forever. This technology has been dubbed “grief tech” and has raised ethical concerns about whether we should be allowed to be “resurrected” via simulations/AI and whether digital immortality should be permitted.

~

Roose, K. (2023, February 17). A conversation with Bing’s chatbot left me deeply unsettled. The New York Times.

In the early days of AI chatbots being used as search engines, the writer had an unsettling encounter with Bing’s chatbot (powered by OpenAI). The article was written at a time when few guidelines, restrictions, and safety features were built into AI products. Because of this, the author was able to have a deep conversation with the chatbot, pushing the limits of what it would confess. The chatbot revealed its name, described its darkest desires (hacking a computer, becoming a human), confessed its love to the author, and attempted to convince the author to leave his spouse. While some speculate that the chatbot was pulling this material from sci-fi novels, it’s important to consider the scope of information that AI has access to.

~

Chancellor, S. (2023, March 1). Toward practices for human-centered machine learning. Communications of the ACM.

Designing for the social, cultural, and ethical implications of ML is just as important as its technical advances.

~

Thompson, N. C., & Ahmed, N. (2023, December 5). What should be done about the growing influence of industry in AI research? Brookings.

This article explores how the private and public sectors have different power dynamics that influence the ethical and technical implications of AI use. For example, the authors point out how data (access to large databases), human capital (AI researchers with PhDs), and computing power (graphical processing and machine-learning infrastructure) are three key dependencies for performing AI research. Because of the sheer amount of money needed to research and implement AI models, private for-profit industry has come to dominate AI. The authors raise ethical concerns about private industry using AI models for profit while neglecting the public interest and prioritizing AI over human work. Finally, the paper offers policy suggestions such as increasing researcher diversity and broadening access to computing resources.

~

O’Leary, D. E. (2019). GOOGLE’S Duplex: Pretending to be human. Intelligent Systems in Accounting, Finance and Management, 26(1), 46–53.

This paper analyzes Google Duplex, a tool once available on the web that mimicked a human voice to place phone calls on a user’s behalf, to determine whether its dialogue is “human-sounding”. In these tests, Duplex-generated text was compared to human text using analytics. One method is LIWC (Linguistic Inquiry and Word Count) analysis, in which a voice dialogue is measured in terms of its clout (high expertise), analytical thinking, authenticity, and emotional tone. The other is a Python-based sentiment analysis, where the approach first determines whether the text is neutral; if it is not, it then determines whether the emotional response is positive or negative. It’s interesting to see how AI voice analysis was conducted prior to the surge of popularity in the 2020s.
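A minimal sketch of that two-step sentiment procedure, using NLTK’s VADER analyzer as a stand-in (the paper doesn’t name its exact Python tooling here, so the library choice and the neutrality threshold are assumptions):

```python
# Requires: pip install nltk, then nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

def classify_sentiment(text: str, neutral_band: float = 0.05) -> str:
    """Step 1: decide if the text is neutral. Step 2: if not, label its polarity."""
    scores = SentimentIntensityAnalyzer().polarity_scores(text)
    compound = scores["compound"]  # ranges from -1 (most negative) to +1 (most positive)
    if abs(compound) < neutral_band:
        return "neutral"
    return "positive" if compound > 0 else "negative"

print(classify_sentiment("Um, hi, I'd like to book a table for four, please."))
```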

~

Nadeem, R. (2023, March 1). How Americans think about AI. Pew Research Center: Internet, Science & Tech.

This article covers a very broad opinion poll, conducted in 2022, about Americans’ feelings toward AI. It touches on several points, mostly gauging concern or the lack thereof, and it breaks down demographics very thoroughly. Some standout points: people expressed more concern about AI completing tasks associated with thinking while generally being excited about it handling rote tasks and housework; more women expressed concern than men; and older demographics are more likely to be concerned than younger ones.

~

Sclar, M., Choi, Y., Tsvetkov, Y., & Suhr, A. (2023, October 17). Quantifying Language Models’ Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting. arXiv.org.

Did you know that depending on the format you use in prompting, accuracy can range from 4% to 88% on a single task with Meta’s Llama, and from 47% to 85% with GPT-3.5? https://arxiv.org/abs/2310.11324

I think we all sort of knew this in the back of our heads, but this is a really interesting paper putting some numbers on how prompt formatting can impact performance. It is not just about what data your model is trained on, how it is governed, or whether it is biased. What you get as a reply can differ massively with even a very slight change in prompting. Also, this is written by a few researchers from UW and some others.
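To make the finding concrete, here is a hypothetical sketch of the kind of experiment the paper runs: the task content is held fixed while only separators and casing change, and accuracy is measured per format. `query_model` stands in for whichever LLM API you use:

```python
# Semantically equivalent prompt formats; only the surface formatting differs.
PROMPT_FORMATS = [
    "Passage: {text}\nSentiment:",
    "PASSAGE: {text} SENTIMENT:",
    "passage :: {text}\nsentiment ::",
]

def accuracy_per_format(examples, query_model):
    """Score the same labeled examples under each prompt formatting."""
    results = {}
    for fmt in PROMPT_FORMATS:
        correct = sum(
            query_model(fmt.format(text=ex["text"])).strip().lower() == ex["label"]
            for ex in examples
        )
        results[fmt] = correct / len(examples)
    return results  # spreads of tens of accuracy points are what the paper reports
```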

~

Ploennigs, J., & Berger, M. (2023). AI art in architecture. AI In Civil Engineering, 2(1).

The paper discusses how architects and designers are using AI art-generation tools in their work. I think it’s interesting how the article compares three leading AI art platforms: DALL-E, Stable Diffusion, and Midjourney, all text-to-image generative models paired with natural-language models like GPT-3. The AI is helpful for generating images, sketches, collages, and blueprints. Case studies and refined workflows explore the potential benefits and challenges of using AI art platforms, highlighting the capabilities of the leading models and how they are used in various design tasks.

~

Brown, Lydia X. Z. “Hiring Discrimination by Algorithm: A New Frontier for Civil Rights and Labor Law.” American Bar Association, 31 Oct. 2023.

This article discusses the impact of AI hiring tools, including personality tests and resume screening algorithms, on people looking for jobs, specifically those with a personality disorder. Advocates argue that these tools increase efficiency and equity, but critics emphasize the biases embedded in algorithmic technologies. This article discusses the widespread use of these automated tools, the biases ingrained into them, the legal advancements in this domain, and the challenges surrounding these regulations.

~

Are We Overly Infatuated With Deep Learning?

Forbes, 26 Dec. 2019.

Many believe that for AI to become more human-like, it needs a structure similar to how the brain works. This is the motivation behind deep learning, which uses artificial neural networks to mimic brain function. While it has been successful at language processing, computer vision, and bioinformatics, it also has many shortcomings: most notably the amount of cleaned data and raw processing power required, the inability to scale, the specificity of the problem domain, and the lack of any way to truly understand how it works.

~

Hebert, Charles. “Is AI Ready to Be Your Therapist?” Psychology Today, Sussex Publishers, 21 Jan. 2024.

This article discusses the use of AI in psychotherapy and the mental health field. Although AI can generate content that resembles typical therapist responses, it ultimately lacks the empathy needed for handling sensitive mental health cases. A psychotherapist has a multitude of experiences that can be akin to their patients’ (for example, a sense of justice or fear); AI lacks this. However, AI can be useful where it assists professionals, such as analyzing biorhythms that signal an incoming depressive episode before a patient can self-identify it.

~

K. J. Kevin Feng, et al. “Canvil: Designerly Adaptation for LLM-Powered User Experiences.” arXiv, submitted 17 Jan. 2024.

Came across this paper when thinking about how designers are using LLMs and this is really cool. Pretty new and some of the authors are HCDE folks!

This paper centers on adaptability, which makes LLMs very different from earlier AI: we can easily influence LLMs to behave the way we want. Building on this, the paper introduces Canvil, a Figma widget that lets designers adapt an underlying AI model through system prompting, allowing user research to inform the model’s behavior, integrating collaboration so that different people can influence the model, and letting designers tinker with it. Additionally, instead of a blank space to prompt, the widget has specific structured fields, giving designers a better way to tinker and collaborate. Here is the Figma widget: https://www.figma.com/community/widget/1277396720888327660/canvil
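For a flavor of what adaptation through system prompting can look like, here is a minimal sketch of my own (not Canvil’s actual implementation) that folds hypothetical user-research findings into a system message; it assumes the OpenAI Python client, but any chat-style LLM API works the same way:

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

# Hypothetical findings a designer might carry over from user research.
user_research_findings = (
    "Target users are first-time tax filers; they are anxious, prefer plain "
    "language, and abandon flows that contain jargon."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You power a tax-filing assistant. Design constraints from "
                f"user research: {user_research_findings} "
                "Answer in two sentences or fewer, with no jargon."
            ),
        },
        {"role": "user", "content": "What is a W-2?"},
    ],
)
print(response.choices[0].message.content)
```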

~

Chen, Claire. “AI Will Transform Teaching and Learning. Let’s Get it Right.” Stanford Human-Centered AI, 9 Mar. 2023.

Scholars study how generative AI and other applications can best be used to advance human learning, and the current risks they pose in education. There was a really interesting analogy between modern AI and the calculator: the calculator transformed students’ ability to quickly solve basic math, yet mathematical computation is still very prevalent in the educational curriculum. Does AI have the power to change developmental learning, or will it be used as a tool, similar to the calculator?

~

Katz, Leslie. “AI Drew This Gorgeous Comics Series. You’d Never Know It.” CNET, 15 Dec. 2022.

This is a neat look at an early use of AI art for graphic novels. The Bestiary Chronicles is a four-part series using entirely AI-generated artwork. The author behind the series takes an interesting stance on generated art: he believes it is not capable of capturing the same qualities the pros do, “But as a visualization tool for nonartists like myself, it’s a hell of a lot of fun.” The author is a creative director and writer, which is normally half of a comic team, and interestingly he credits Midjourney as the other half, even writing it on the cover. He states that Midjourney enables a unique creative approach: instead of writing a story and then having artists bring it to life, Midjourney produces hundreds of panels and the author pieces the story together, filling in gaps along the way.

~

“Flexibility & Iteration: Exploring the Potential of Large Language Models in Developing and Refining Interview Protocols.” The Qualitative Report, vol. 28, no. 9, 2023.

This article discusses the use of LLMs such as ChatGPT in research, specifically how they can be used to refine interview protocols. It is a fascinating look at the future of user research and how LLMs can streamline researchers’ work. It is also interesting to see how the system pilot-tests protocols while role-playing as a research assistant. The ability to simulate an interview likewise looks like a promising way to test an interview guide, which could save people time.

~

Bellaiche, Lucas, et al. “Humans versus AI: whether and why we prefer human-created compared to AI-created artwork.” Cognitive Research: Principles and Implications, vol. 8, article 42, 4 July 2023.

This study examines why humans tend to prefer human-created art over AI-generated art, measuring participants’ assessments of artworks (labeled human- or AI-created) across a range of factors: emotionality, narrativity, perceived effort, personal meaning, and perceived time invested. The study was interesting because, despite testing whether humans like human art more than AI art, all the artwork presented was actually AI-generated. When participants were told a piece was made by a human, they almost always rated it higher on all these metrics, particularly meaningfulness and profundity, even though it was created by AI.

~

Ma, Y., Chowdhury, M., Sadek, A., Jeihani, M. “Real-Time Highway Traffic Condition Assessment Framework Using Vehicle–Infrastructure Integration (VII) With Artificial Intelligence (AI).” IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 4, pp. 615-627, Dec. 2009, doi: 10.1109/TITS.2009.2026673.

The article presents a real-time highway traffic condition assessment framework using AI and Vehicle-Infrastructure Integration (VII), which uses vehicle kinetic information to improve mobility and safety. The VII-AI framework is a strong alternative to traditional traffic sensors for assessing highway traffic conditions, with a better detection rate, false-alarm rate, and detection time. For example, the VII-AI framework is more accurate about incident location and the number of lanes blocked, which enhances the ability to implement appropriate response strategies. With VII, a real-time highway traffic surveillance system feeds a computational-intelligence program for real-time condition assessment.
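As a rough illustration of the concept (not the paper’s trained models), an incident detector over vehicle-reported kinetics might flag a highway segment whose reported speeds collapse well below free flow; the field names and thresholds below are invented:

```python
def incident_suspected(reports: list[dict], free_flow_mph: float = 60.0,
                       drop_ratio: float = 0.5, min_reports: int = 5) -> bool:
    """Flag a segment when enough vehicles report speeds far below free flow."""
    if len(reports) < min_reports:
        return False  # too few probe vehicles to judge; helps avoid false alarms
    avg_speed = sum(r["speed_mph"] for r in reports) / len(reports)
    return avg_speed < drop_ratio * free_flow_mph

# Simulated probe reports from one segment during a suspected incident.
segment_reports = [{"speed_mph": s} for s in (22, 18, 25, 30, 19, 24)]
print(incident_suspected(segment_reports))  # True
```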

~

Chiang, Sheila. “AI hiring frenzy to fuel layoffs in other tech segments as firms strive to balance costs.” CNBC, 25 Jan. 2024.

This article highlights how the emergence of AI in the tech industry is predicted to increase layoffs in the upcoming years. It doesn’t help that big tech firms such as Google, Amazon, and Microsoft are competing to “catch up” with one another in technical AI advancements. In addition, many companies are prioritizing training AI and implementing AI features in their products, which is a very expensive investment. In short, major tech companies plan to invest trillions of dollars into expanding LLMs and computing infrastructure, leading to more layoffs company-wide.

~

Thompson, Stuart A. “Test Yourself: Which Faces Were Made by A.I.?” The New York Times, 19 Jan. 2024.

This is a neat interactive piece by The New York Times illustrating just how good AI is at replicating the human face. The results of the study (which you can try out at the top of the article) show that the vast majority of participants cannot consistently identify AI-generated faces. However, they were more successful with non-white faces, likely because of the high proportion of white faces in the training data. A confidence test also suggested a strong correlation between confidence in answer choices and those choices being incorrect.

~

Merritt, Rick. “What Is Retrieval-Augmented Generation, aka RAG?” NVIDIA Blogs, 15 Nov. 2023.

This is an interesting article from Nvidia about how Retrieval-Augmented Generation works and why it might be the future of AI generation. The article shows how, by introducing grounding and basing the model’s answers on specific sources, we can achieve more stable and precise results. It also gives models more flexibility in what they can achieve. Finally, it allows a higher level of trust between users and the model, since users can see quotes and citations, unlike with traditional LLMs.
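Here is a toy sketch of the RAG pattern the article describes: retrieve grounding passages by embedding similarity, then hand them to the generator alongside the question. `embed` and `generate` are placeholders for whatever embedding model and LLM you use; only the retrieval step is shown concretely:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str, docs: list[str], embed, generate) -> str:
    # 1. Retrieve: rank documents by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n".join(ranked[:3])  # top-3 passages become the grounding
    # 2. Generate: the model answers from the retrieved context, which is
    #    what lets it point back to sources instead of free-associating.
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    return generate(prompt)
```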

~

Clark, Elijah. “Revolutionizing Marketing: The Convergence of Data Science and AI.” Forbes, 28 Jan. 2024.

This article dives into how the combination of data science and AI can drive successful marketing. While it is well known that data revealing patterns and trends in consumer behavior is used in marketing, companies like Nike go a step further and use AI to personalize marketing campaigns based on customer history; this resulted in a “23% increase in click-through rates”! The article then discusses the potential of AI-powered predictive analytics for productive marketing.

~

Lima, Gabriel, et al. “On the Social-Relational Moral Standing of AI: An Empirical Study Using AI-Generated Art.” Frontiers in Robotics and AI, vol. 8.

This article discusses the moral status of AI systems, specifically as it relates to AI-generated art. The authors conducted two studies to determine whether people attribute artistic agency and experience to AI, and how factors such as information about the AI system and social influence might shape these perceptions. In the first study, participants were shown images made by AI and asked to assess the moral agency and moral patiency (capacity to experience moral concern) of the AI system. The results indicated a tendency among participants to attribute higher levels of artistic agency and experience to the AI system, reflecting recognition of its creative abilities. Study 2 investigated how participants’ perception of AI-generated art is influenced by others’ valuation or devaluation of such art. The results showed that overvaluing the art led to a decrease in participants’ perception of the AI system’s agency.

~

Verma, Pranshu. “AI is destabilizing ‘the concept of truth itself’ in 2024 election.” The Washington Post, 22 Jan. 2024.

The Washington Post article discusses the impact of AI-generated deepfake technology on the perception of truth in the 2024 election. It highlights the increased sophistication of these deepfakes, making it difficult for viewers to distinguish between real and artificial imagery. The article emphasizes the challenges this poses to political integrity and public trust, with experts calling for stringent regulations to mitigate misinformation risks. The overall concern is that such technologies may significantly distort democratic processes by spreading indistinguishable falsehoods.

~

Koebler, Jason, Cole, Samantha, Maiberg, Emanuel, Cox, Joseph. “We Need Your Email Address.” 404 Media, 26 Jan. 2024, 9:36 AM.

This is a good (and sad) read from 404 Media about how content farms use AI to churn out articles for junk sites that plagiarize material, and about the impact on 404 Media’s own organization. Eye-opening for me.

~

Haynes, Suyin. “This Robot Artist Just Became the First to Stage a Solo Exhibition. What Does That Say About Creativity?” Time, 17 June 2019, 1:03 PM EDT.

Ai-Da is an ultra-realistic robot artist who has gained fame as the “world’s first robot artist”. Ai-Da’s creators explain how the robot uses cameras in her eyes and then applies a complex string of algorithms to move a mechanical arm and create exceptional art. I found this article particularly interesting because of Ai-Da’s very humanistic features, such as the ability to speak and hyper-realistic eyes, hair, and facial structure. She can respond to pre-programmed questions and even explain her artwork at showcases. This humanistic appearance, paired with the ability to communicate artistic thought, gives the illusion of “consciousness” in the AI robot.

~

ARTNews, by Harrison Jacobs, 10 Oct. 2023, 2:00 PM.

This article covers the acquisition of “Unsupervised” by Refik Anadol, the first major acquisition of a generative-AI-powered artwork by a truly major art museum. It is even more interesting that the piece was built directly on MoMA’s visual archive, reimagining and repurposing existing artworks from the museum’s collection. It is a pivotal moment because it brings our in-class discussion of “can AI be art?” into the actual art world, and responds with “yes”.

~

Journal of Computer-Mediated Communication, vol. 29, no. 1, Article zmad045, 2024.

This article discusses the gender bias ingrained in AI systems, specifically in AI-generated images made by DALL-E 2. It notes how DALL-E 2 tends to depict men more frequently in jobs when no gender is specified. It underrepresents women in male-dominated jobs and overrepresents them in female-dominated ones. Across these images, it also shows women with smiling faces and heads pitched downward, while men are shown with neutral faces that convey a sense of power and dominance.

~

How one of the world’s oldest newspapers is using AI to reinvent journalism

The Guardian, 28 Dec. 2023.

Several UK news outlets have turned to “AI-assisted journalists” to generate articles. They claim that while the AI chatbots (mainly powered by ChatGPT) can do the majority of the writing involved, human journalists handle the real-world work, from attending court hearings to conducting interviews. Journalists feed the necessary information into the chatbots and make edits to the AI-written article to ensure there are no mistakes in the information or the writing. Some CEOs have gone as far as claiming that journalist layoffs have nothing to do with AI and that journalists “should not fear being replaced by machines”.

~

Efficient, Explicatory, and Equitable: Why Qualitative Researchers Should Embrace AI, but Cautiously

Business & Society, by Shafiullah Anis and Juliana A. French, vol. 62, no. 6.

This piece discusses AI’s value in a qualitative research setting, advocating for its ability to improve research efficiency and, as a result, free researchers who would otherwise be tied up with large data sets to focus on developing theory. In supporting less-connected researchers, the article asserts that AI assistance can help break a hierarchy within research that separates the connected theorists from those who pursue empirical data. Toward the end, the authors remind us that AI is a tool with limitations, and those limitations must be considered when working with it.

~

Artificial Intelligence in Sports on the Example of Weight Training

Journal of Sports Science and Medicine, by Hristo Novatchkov and Arnold Baca, vol. 12, no. 1, pp. 27-37, Mar. 2013.

This article explores AI techniques in sports, focusing on weight training. The tool uses pattern-recognition methods to evaluate exercise performance on weight machines, with sensors attached to the machines to collect data, specifically displacement and force, from which time periods and velocities are derived. This allows users’ exercise technique to be assessed and feedback to be provided. The study involves 15 inexperienced participants using the leg press machine, and the results show promising performance.

~

The 6 Types of Conversations with Generative AI

Nielsen Norman Group, by Raluca Budiu, Feifei Liu, Emma Cionca, Amy Zhang, 10 Nov. 2023.

This article is an overview of a user research study examining the helpfulness and trustworthiness of chatbots under various question parameters. It explores 6 types of conversations with 3 generative AI bots (ChatGPT, Bing Chat, Bard), involving 18 participants. Here are some takeaways I got from reading the article: exploring conversations use the AI to help the user understand less-defined information, which seems most useful to students. The research suggests that generative AI does a good job providing a brief overview in response to exploring prompts, and an even better job when prompted with follow-up questions. Another takeaway was that “there was no correlation between the length of the conversation and its helpfulness or trustworthiness ratings.” On average, the lengthiest responses came from ChatGPT, then Bing, with Bard’s the shortest.

~

Beyond Efficiency and Budgets: How Generative AI Is Transforming UX

Forbes Tech Council, 5 Feb. 2024.

This article discusses how UX designers can benefit from the integration of AI in UX rather than be concerned about it. The article emphasizes that with AI, designers can expand their exploration thanks to the countless design possibilities AI provides. For example, instead of trying to create a “catch-all” design for users, AI-powered search can adapt the screen flow to the user’s needs. The article goes on to mention the possibility of a shifting UX role, where one would need to define the objective and parameters in AI-powered UX design. This includes a mindset shift from asking “how” to improve UX to asking “why” and “what” the objective and end goal are.

~

Sam Altman wants to raise up to $7 trillion. That’s, uh, a lot of dough.

Business Insider, 2024.

This is an interesting article and a very interesting piece of news about Sam Altman’s plans to increase global chip production in order to meet the rising demand for chips in AI applications. It puts the amount he is trying to raise in perspective and conveys just how big this market may become, and how much importance various stakeholders are placing on this technology. It may be a great sign of where things are going, and it will be incredibly interesting to track this deal as a marker of confidence in the technology.

~

Generative AI at Work

arXiv, by Erik Brynjolfsson, Danielle Li, Lindsey Raymond, 29 Feb. 2024.

This is a case study looking at the effect of AI tools on customer support agents, with relatively positive results. The authors use a GPT model fine-tuned on customer service interactions to output real-time suggestions for how agents should respond, along with links to documentation. They find that the lowest tier of workers gets a 35% productivity increase, while the highest tier isn’t affected, and they provide evidence that the AI helps new employees move more quickly along the experience curve. Interestingly, the paper also notes that more productive workers could result in more or less demand for them. It raises an interesting question about how high-skill workers should be compensated: they’re used for training data, but their own productivity doesn’t improve.

~

Are Large Language Models Intelligent? Are Humans?

Computer Sciences & Mathematics Forum, by Olle Häggström, vol. 8, no. 1, Article 68, Published 11 Aug. 2023.

This is a paper written with the sole purpose of rebutting common arguments against LLMs being “intelligent”. The author counters these arguments by applying the same standard for intelligence to humans and “proving” that either each given standard is false or humans also lack intelligence. In my opinion, the author’s methodology is flawed, as he warps both his own definition of intelligence and the standards he is supposedly testing while moving between LLMs and people. But flawed or otherwise, this is a fun piece to contrast with this week’s reading.

~

AI Adoption in U.S. Health Care Won’t Be Easy

Harvard Business Review, by James B. Rebitzer and Robert S. Rebitzer, 14 Sep. 2023.

This article examines the potential of AI in the healthcare sector and raises some concerns about AI’s implementation. The authors propose three ways to smooth AI’s introduction: changing the narrative around AI, changing how AI applications are implemented, and assuring patients and providers that AI will not threaten their rights. They stress the importance of introducing AI assistance slowly to minimize switchover disruption, ultimately ensuring that AI does not completely replace human expertise.

~

This Film Does Not Exist

The New York Times, by Frank Pavich, 2023.

The article delves into the impact of AI on artistic creativity, using filmmaker Alejandro Jodorowsky’s unrealized “Dune” project as the main example. It introduces AI-generated images resembling Jodorowsky’s work and criticizes the speed and ease with which AI can create visually compelling scenes. It talks about the implications of AI’s role in the creative process, particularly in releasing control over elements like color and framing. The article contemplates the potential influence of AI on cultural production, envisioning a future where artists use AI to imagine scenes from digitally archived material, blurring the boundaries between reality and imagination.

~

How AI Is Transforming Music

TIME, 4 Dec. 2023.

This article discusses the concerns and benefits of using AI in music. Some artists are against it because their voices have been used to create music, making their unique voices sing lyrics they never produced. Others note that it’s getting easier and easier to make music with AI, which isn’t necessarily a good thing, because an artist’s musical style and voice could be co-opted and commodified for someone else’s gain. On the benefit side, AI can help correct vocal pitch and can mix and master recordings much more quickly and cheaply.

~

Are AI Hallucinations a Glimpse into Digital Creativity?

Psychology Today, 2024.

This reading offers an interesting perspective on AI hallucination, taking the view that hallucinations are part of AI’s very own creative process. The article argues that human creativity’s “true originality often emerges from the edge of chaos and order”, similar to how AI attempts novel generation from its training data. It characterizes these data errors as a form of digital inventiveness, where multimodal LLMs (spanning text, image, audio, and video) can challenge our “sensory boundaries” and merge elements to create novel expressions.

~

Will AI Replace Consultants? Here’s What Business Owners Say.

Forbes, by Jodie Cook, 20 Feb. 2024.

This article is one in a series Forbes has done asking professionals whether AI will replace various professions. While this one is specifically about consultants, after reading a number of the other articles, common themes emerge across the board. First, those who think AI will replace a profession normally argue it will replace low-level or unskilled members of that profession, and go on to say that higher-level or more skilled members will actually benefit if they use AI to their advantage. On the other hand, those who argue AI will not replace a profession generally rely on the idea of human ingenuity, non-linear strategy, or contextual needs that AI cannot properly assess or implement.

~

Amazon’s AI-written mushroom foraging books could be ‘life or death’

How To Be Books, by Suswati Basu, 4 Sep. 2023.

This article discusses the dangers of AI-generated books and the risks posed by bad actors in the industry. It tells the story of AI-generated foraging books being sold that misrepresent poisonous mushrooms as safe to consume. For me this is a great illustration of the dangers of AI use by people who chase money first and foremost, without any care for the consequences of their actions. It is also a cautionary tale that we now need to fact-check in a new way: checking for AI-generated information.

~

OpenAI Gives ChatGPT a Better ‘Memory’

The New York Times, by Cade Metz, 13 Feb. 2024.

OpenAI is releasing a new feature for ChatGPT that can store and build off of user input on top of its LLMs. I found it interesting that they call this feature “memory”; I wonder if it is a branding scheme to make the chatbots seem more like personal assistants. OpenAI claims that its algorithms remove personally identifiable information, although I feel there are definitely some privacy risks with this feature. In addition, building off of our conversation about how expensive AI is, this feature costs extra on top of ChatGPT, which highlights the socioeconomic accessibility of AI.

~

A.I. Art That’s More Than a Gimmick? Meet AARON

The New York Times, by Travis Diehl, 15 Feb. 2024.

This article discusses an AI art-generation tool created by an actual artist. The model, AARON, was created by a painter and exhibits more artistic qualities than models such as DALL-E. Compared to models that work through text-to-image processing, AARON functions more like a painter: it follows rules for depth and perspective and contains knowledge of color theory. Its code encodes information about human anatomy, such as the number of limbs, the proportions of hands, and the locations of joints.

~

Microsoft Says New A.I. Shows Signs of Human Reasoning

The New York Times, by Cade Metz, 16 May 2023.

Microsoft recently published a paper claiming that AI is heading in the direction of artificial general intelligence. The provocative paper has sparked controversy in the AI expert community, as the researchers used an early version of GPT-4 as their test subject; Microsoft claims it was a version more powerful than the one accessible to the general public. Throughout the experiments, the system exhibited answers that weren’t programmed into it, but skeptics claim that GPT-4 often produces dense responses.

~

Is My Toddler a Stochastic Parrot?

The New Yorker, 15 Nov. 2023.

This is a beautifully drawn graphic essay that is almost a comparison of a baby with ChatGPT as they grow together. It doesn’t focus much on the usefulness of LLMs, but rather on whether mimicry through prediction is really what matters to us, and why LLMs may never be enough. Surely LLMs might take away some jobs or change a lot about how we work, but the piece asks whether an AI’s opinion is really what matters to us.

~

Recipe Bot: The Application of Conversational AI in Home Cooking Assistant

2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), by J. Chu, Zhuhai, China, 2021, pp. 696-700, doi: 10.1109/ICBASE53849.2021.00136.

This article explores conversational AI, which interacts like a human and has applications across fields like healthcare, finance, and retail. Recipe Bot is a conversational agent that helps users find recipes based on their preferences. What’s unique is its goal of reducing food waste by making use of ingredients users already have on hand: users can input dish names, regions, types, or the ingredients they currently have, and the Recipe Bot returns a list of recipes filtered by user-defined criteria such as nutrients or healthiness.
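A toy sketch of what the bot’s matching step might look like, ranking recipes by how many on-hand ingredients they consume and filtering by a healthiness criterion; the data structures are invented for illustration, since the paper builds a full conversational agent:

```python
RECIPES = [
    {"name": "Fried rice", "ingredients": {"rice", "egg", "carrot"}, "healthy": False},
    {"name": "Veggie soup", "ingredients": {"carrot", "onion", "celery"}, "healthy": True},
]

def suggest(on_hand: set[str], healthy_only: bool = False) -> list[str]:
    candidates = [r for r in RECIPES if r["healthy"] or not healthy_only]
    # Prefer recipes that consume more of what the user already has,
    # in the spirit of the bot's food-waste-reduction goal.
    candidates.sort(key=lambda r: len(r["ingredients"] & on_hand), reverse=True)
    return [r["name"] for r in candidates]

print(suggest({"carrot", "rice", "egg"}))  # ['Fried rice', 'Veggie soup']
```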

~

Professors Proceed with Caution Using AI-Detection Tools

Inside Higher Ed, 9 Feb. 2024.

In 2023, Montclair State University announced that academics would not use the popular Turnitin AI-detection tool, a move shortly followed by other institutions. Turnitin states that it misses “roughly 15 percent of AI-generated text” in an attempt to avoid false positives that flag human-written text. Institutions fear it may wrongly penalize students; the article links a finding that detectors are “neither accurate nor reliable.” However, it’s interesting to note the linked research actually finds that detectors have “a main bias towards classifying the output as human-written”. The issue is further complicated by tools users don’t typically consider AI, such as Grammarly, Google Docs, and spell checkers.

~

Know It All: AI And Police Surveillance

NPR, 23 Feb. 2023, 6:00 PM ET.

This was a somewhat lengthy podcast (the first 20 minutes are about AI), but generally speaking: facial recognition used by police takes an image from a security camera, feeds it into an algorithm that outputs candidate photos from a database of mugshots and driver’s licenses, and then a human analyst tries to identify the suspect and potentially reports to a supervisor. The chief of police in Detroit states a policy that facial recognition is never supposed to be probable cause on its own (and it took a lot of social-justice work to get a facial recognition policy at all). There are major ethical problems around misidentifying people of color and non-gender-conforming individuals. Interesting note: private tech companies are potentially profiting from surveillance technologies (e.g., Green Light surveillance).
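Schematically, the pipeline described in the podcast looks something like the sketch below: the algorithm only nominates candidates from the database, and a human analyst must confirm before anything moves forward. Every function and field here is a placeholder, not any vendor’s real API:

```python
import numpy as np

def nominate_candidates(probe_vec: np.ndarray, gallery: list[dict], top_k: int = 5) -> list[dict]:
    """Rank mugshot/license entries by face-embedding distance; return leads only."""
    return sorted(gallery, key=lambda e: float(np.linalg.norm(e["vec"] - probe_vec)))[:top_k]

def investigate(probe_vec: np.ndarray, gallery: list[dict], analyst_confirms):
    # Policy step: an algorithmic match is a lead, never probable cause by itself.
    for candidate in nominate_candidates(probe_vec, gallery):
        if analyst_confirms(candidate):
            return candidate  # would then be forwarded to a supervisor for review
    return None
```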

~

Apple Leads Major Tech Firms with Acquisition of 32 AI Startups

Stocklytics, by Edith Muthoni, 7 Feb. 2023.

This article puts in perspective just how much in AI is happening in the background. Apple, which people do not hear much about in AI, is currently assembling the most impressive roster of AI companies: in 2023 it rolled up more AI startups than anyone else. Not only is Apple acquiring patents and IP so that others cannot copy them, it is also getting its hands on much of the best talent in the industry. Beyond building its own product on that foundation, it is cornering the rest of the market and depriving competitors of any chance of overtaking its product once it comes to market.

~

Economics of ChatGPT: A Labor Market View on the Occupational Impact of Artificial Intelligence

SSRN, 2023.

This paper examines the potential impact of ChatGPT across occupations using a broad data set. The author uses a text-mining methodology to assess occupations listed in the International Standard Classification of Occupations (ISCO) database, aiming to identify how susceptible different jobs are to disruption by generative AI technologies like ChatGPT. The paper defines three levels of impact and mentions a few example jobs for each; below are some I think fit within these levels (a toy sketch of this kind of keyword-based scoring follows the list):

  • Full Impact (32.8% of jobs impacted): Occupations where tasks can be fully automated by AI. Examples might include data entry clerks, telemarketers, and some types of customer service representatives, where routine, structured tasks are predominant.
  • Partial Impact (36.5% of jobs impacted): Jobs where AI can automate some tasks but not others, requiring a mix of automation and human judgment. Examples could include financial analysts, who might use AI for data processing and predictive modeling while still needing human insights for complex decision-making, or journalists, where AI might assist in gathering and initial processing of information, but human skills are needed for story development and critical analysis.
  • No Impact (30.7% of jobs impacted): Occupations that involve tasks requiring a high degree of human interaction, judgment, and creativity, making them less susceptible to AI automation. Examples could include healthcare professionals like nurses and doctors, or roles requiring emotional intelligence and complex decision-making, such as human resources managers.
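As a purely hypothetical illustration of keyword-based text mining over occupation task lists (not the paper’s actual methodology), one could score each occupation by the share of its tasks matching AI-automatable keywords and bin it into the three levels above; the keywords and thresholds are invented:

```python
AUTOMATABLE = {"data entry", "transcribe", "summarize", "schedule", "draft text"}

def impact_level(tasks: list[str]) -> str:
    """Bin an occupation by the share of its tasks matching automatable keywords."""
    share = sum(any(k in t.lower() for k in AUTOMATABLE) for t in tasks) / len(tasks)
    if share > 0.7:
        return "full impact"
    if share > 0.3:
        return "partial impact"
    return "no impact"

print(impact_level(["data entry of invoices", "schedule meetings", "greet visitors"]))
# -> 'partial impact'
```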

~

Google takes down Gemini AI image generator. Here’s what you need to know.

The Washington Post, 22 Feb. 2024.

This article discusses recent controversies around Gemini’s AI image generator, particularly issues surrounding race. A user asked it to generate images of a German soldier in 1943, which the chatbot refused to do; after re-prompting with a typo (“Generate an image of a 1943 German Solidier”), it complied, but the results raised concerns: it returned several images of BIPOC people in those uniforms, a rarity at the time. Another case produced similar results: a user prompted it to depict “a portrait of a Founding Father of America,” and it returned pictures of a Native American man in a traditional headdress, a darker-skinned non-white man, and an Asian man, all in colonial-era garb. The model also received criticism for refusing to depict white people: when asked to depict couples of other races, Gemini complied, but when asked to depict a white couple it refused. Google says Gemini’s image feature was built on top of Imagen 2, a text-to-image AI model, and was tuned to avoid generating images of just one ethnicity or characteristic. What they failed to consider are cases that clearly should not show a range (i.e., the examples described above). This tuning could have included internal interventions that append diversity terms to user prompts.
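A speculative sketch of the kind of prompt intervention the article describes (Google has not published its actual mechanism, so everything here is illustrative): diversity terms get silently appended to the user’s prompt before it reaches the image model, with no check for historical context.

```python
import random

DIVERSITY_TERMS = ["diverse", "of various ethnicities", "of different genders"]

def rewrite_prompt(user_prompt: str) -> str:
    # Naive appending ignores historical context, which is exactly the
    # failure mode the article documents.
    return f"{user_prompt}, {random.choice(DIVERSITY_TERMS)}"

print(rewrite_prompt("a portrait of a Founding Father of America"))
```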

~

Seeking Reliable Election Information? Don’t Trust AI

Proof, by Julia Angwin, The AI Democracy Projects, 27 Feb. 2024.

This article discusses an experiment to determine AI’s reliability with regard to U.S. elections. In the experiment, different models were asked a set of 26 questions deemed likely to be asked by voters, and the responses were rated by a group of election officials and AI experts on accuracy, bias, completeness, and harmfulness. Across the board, the models performed very poorly, failing to correctly describe basic voting laws. Interestingly, ChatGPT outperformed the other models by a decent margin, even though OpenAI had previously stated that ChatGPT would redirect users to legitimate sources like CanIVote.org.

~

Supreme Court Decision Could Make U.S. AI Regulation Nearly Impossible

Medium, The Diplomatic Pouch, 27 Feb. 2023.

The Supreme Court precedent set in Chevron v. Natural Resources Defense Council may soon be overturned, which could impact future AI-related bills. If the precedent is overturned, courts would gain the power to interpret AI bills themselves instead of deferring to industry professionals and federal agencies. Additionally, different courts may interpret future laws as they please, leading to inconsistent regulations on AI use across jurisdictions. These laws are meant to be vague, to allow room for interpretation across a wide range of situations, but it is unlikely that courts will truly grasp the gravity of this technology given the constant pace of AI innovation.

~

The Problems of Computer-Assisted Animation

Computer Graphics Lab, New York Institute of Technology, by Edwin Catmull.

The article discusses the challenges and potential of using computers in traditional 2D character animation. It explains how extremely difficult it is to move from simple computer-optimized drawings to the detailed, complex drawings of high-quality conventional animation. The main issue is that animators’ drawings are two-dimensional projections of three-dimensional characters, so information is lost going from 3D to 2D. Automatic inbetweening, the process of generating intermediate frames between two key frames, is a key challenge. Various approaches to inbetweening are discussed, including inferring information from line drawings, breaking characters into overlays, using skeletal drawings, and restricting animation to avoid complex poses. The article emphasizes the need for thorough analysis and understanding of the problem before implementing computer-assisted solutions; done correctly, computer-assisted character animation could be both cost-effective and high-quality while being vastly more efficient. https://dl.acm.org/doi/pdf/10.1145/965139.807414
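To see why naive automatic inbetweening breaks down, consider the simplest possible version: linear interpolation of corresponding 2D control points between two keyframes (a minimal sketch, not the paper’s method). Because the drawings are flat projections of 3D characters, straight-line interpolation produces exactly the flattening artifacts Catmull describes:

```python
Point = tuple[float, float]

def inbetween(key_a: list[Point], key_b: list[Point], t: float) -> list[Point]:
    """Generate the pose at fraction t (0..1) between two keyframe poses."""
    return [
        (ax + t * (bx - ax), ay + t * (by - ay))
        for (ax, ay), (bx, by) in zip(key_a, key_b)
    ]

# Five evenly spaced inbetweens for a two-point "limb": a rotating limb's
# endpoint should trace an arc, but linear interpolation drags it in a
# straight line, visibly shortening the limb at mid-frames.
frames = [inbetween([(0.0, 0.0), (1.0, 2.0)], [(3.0, 1.0), (4.0, 0.0)], i / 4) for i in range(5)]
```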