Jim and Mike on the Potential and Limitations of ChatGPT

Published Apr 7, 2023

What are the 5 love languages? Who escaped from Alcatraz? When is the next full moon? These are some of the most searched questions on Google in 2023.

Since its introduction in 1998, Google has revolutionized the way we consume information and has allowed people all over the world to access news and information faster than ever before.

“Tech companies continue to encourage all of us to act first and think of the implications later, if at all.” — Michael D. Smith

As more and more people log in to ChatGPT to get answers and experiment with the technology, data technology experts and advocates have raised questions about its potential impact on society and privacy.

Leaders in the fields of computer science and data privacy, Michael D. Smith (Professor of Engineering and Applied Sciences) and Jim Waldo (Professor of the Practice of Computer Science), answer questions about ChatGPT and offer their thoughts on the risks and rewards that accompany widespread generative AI technology use.

  1. Harvard Online:

    Have you used ChatGPT?

  2. Michael D. Smith:

    Jim uses it, but I’ve used it only through my students.

    Why? Two reasons. First, I’m less interested in how I might use the tool and much more interested in how those in their thirties, twenties, and teens think about and use it. Second, my students and I are completing a technical paper about language bias in tools like ChatGPT, Google, Wikipedia, and YouTube.

    By language bias, I mean how these tools, and others like them, use the language of your query to present cultural stereotypes tied to the language you use in your query. Despite being trained on the global internet, these tools too often turn us into the proverbial blind person touching a small portion of an elephant, ignorant of the existence of other perspectives.

  3. Jim Waldo:

    As Mike said, I’ve been using ChatGPT recently, but in a somewhat non-standard way: I use it as my partner in pair programming. When I’m writing some code, I describe what I want in the user interface and let ChatGPT generate a first version of the code. I may iterate over this several times, since the AI isn’t all that great a programmer, but it does have broad knowledge of the appropriate libraries to use for various components. One thing that has impressed me is that it will tell me where in the code it generates I should worry that it might be wrong.

  4. Harvard Online:

    What interests or concerns do you have about the rise of generative AI technologies?

  5. Michael D. Smith:

    Deepfakes. Soon anyone with a smartphone will be able to create really good ones with very little work, perhaps in combination with a tool like ChatGPT:

    “Hey ChatGPT, take this video I just took of some idiot doing something dumb, put this other person’s face on it, make the voice sound like this person, and then post it on my social media channel with some witty caption.”

    This is just harmless fun, right? Maybe this post will be promoted by the platform and turn me into an influencer. Not only are tech companies too often focused on what novel things you can do with their latest product, but they continue to encourage all of us to act first and think of the implications later, if at all. Then again, the trend will make our course, Data Privacy and Technology, even more important!

  6. Jim Waldo:

    I agree with Mike about the concern over deepfakes. We are going to have to figure out how to deal with the provenance of information in ways we have not had to before.
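Jim’s pair-programming workflow — describe the task, get a first draft, then iterate with feedback — can be sketched roughly as follows. This is a minimal illustration, assuming the `openai` Python package (v1 client); the function names, system prompt, and model choice here are illustrative, not anything from the interview.

```python
# A rough sketch of a ChatGPT pair-programming loop.
# All names and prompts here are illustrative assumptions.

def build_messages(task_description, previous_code=None, feedback=None):
    """Build the chat history for a code-generation request."""
    messages = [
        {"role": "system",
         "content": "You are a pair-programming partner. Generate code, "
                    "and add comments flagging any parts you are unsure of."},
        {"role": "user", "content": task_description},
    ]
    # On later iterations, include the last draft and the human's feedback
    # so the model can revise rather than start over.
    if previous_code is not None:
        messages.append({"role": "assistant", "content": previous_code})
        messages.append({"role": "user", "content": feedback})
    return messages


def request_draft(client, messages):
    """Ask the model for the next draft of the code."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=messages,
    )
    return response.choices[0].message.content


# Usage (requires an OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# msgs = build_messages("A window with a text box and a Submit button")
# draft = request_draft(client, msgs)
# ...review the draft, then call build_messages again with feedback...
```

Each round keeps the human in the loop: the programmer reviews the draft, supplies feedback, and the conversation history carries the context forward.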

Interested in learning more about trending topics in data privacy from Mike and Jim? Stay tuned to the Harvard Online blog page for their take on ChatGPT and Generative AI, or apply to join the next cohort of their course Data Privacy and Technology.
