Over the past month or so, I've been putting ChatGPT through its paces, giving it a wide range of challenges to solve -- from writing a Star Trek script to coding a WordPress plugin. The shocking (and, if I'm being honest, scary) result has been that ChatGPT has met all my challenges rather well. If ChatGPT were one of my students, it would have consistently gotten a solid B.
But now we have Google's entrant, Bard. Google granted me access to the beta release, so I decided to subject it to the same tests I've given to ChatGPT.
Unfortunately, it appears I may not get the same solid B-level work ChatGPT has been able to produce. ZDNET's Associate Editor Sabrina Ortiz conducted a broad range of tests as soon as Bard was available and found disappointing results. You know things might be problematic when the article is titled, "I tested Google Bard. It was surprising -- in a bad way."
So. Yeah.
But let's see if Bard excels in any particular area. To do this, I'm going to do an article-for-article comparison of Bard to ChatGPT. I will pass to Bard the exact same prompts I gave ChatGPT, and you'll be able to compare the results. At the end of the series, we'll tabulate the scores to get a more comprehensive view of both AIs side-by-side.
Let's start with one that I already know isn't going to work.
This first test was where I started getting the feeling that ChatGPT was game changing. In "I asked ChatGPT to write a WordPress plugin I needed. It did it in less than 5 minutes," I gave ChatGPT a prompt describing an actual need my wife had for her WordPress site.
Within a few minutes, ChatGPT provided a working plugin. It's been installed on her site for a little over a month and she uses it regularly. Here's the prompt:
And here's how Bard answered:
I'm not able to help with coding just yet, but I can do things like answer other complex questions, help you learn a new skill, or write a business letter. Do you want me to help with anything else?
To be fair, I knew this was going to be the case. Sabrina's tests found that Bard couldn't code.
In Bard's FAQ, Google answers the question "Can Bard help with coding?" with the answer, "Bard can't help you with coding just yet. Bard is still learning to code, and responses about code aren't officially supported for now."
Now, that's interesting. Sabrina and I just tested to see if Bard could code. But Google's Bard FAQ says "Bard can't help you with coding just yet." I'm betting that's not strictly true. Let's see if Bard can help with coding (as distinguished from writing code).
My next article in the series was "How to use ChatGPT to write code." In this, I explained that there were steps involved in coding beyond just typing code into an editor.
I picked a sample project, a routine that would retrieve a web page, extract the meat of the content (not the ads and all the other stuff that's normally on a page), and summarize the results.
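To give a sense of what that routine involves, here's a toy sketch in pure Python -- my own illustration, not code produced by either AI, and not using any of the libraries discussed below. It strips the markup (and script/style "ad-like" material) from a page and then picks the highest-scoring sentences by simple word frequency:

```python
import re
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script/style/nav blocks --
    a crude stand-in for real main-content extraction."""
    SKIP = {"script", "style", "nav", "aside"}

    def __init__(self):
        super().__init__()
        self.depth = 0          # nesting level inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def summarize(text, n=2):
    """Score each sentence by total word frequency; return the top n in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n]
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))

html_page = """<html><body>
<script>var ad = 'ignore me';</script>
<p>Python is popular. Python libraries make summarization easy.
Cats are fine. Python wins again.</p>
</body></html>"""

extractor = TextExtractor()
extractor.feed(html_page)
article = " ".join(extractor.chunks)
print(summarize(article, n=2))
```

A real project would fetch the page over HTTP and use one of the purpose-built libraries covered next, which handle content extraction and summarization far more robustly than this frequency-counting toy.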
Before writing a line of code, it was necessary to choose libraries and features to include in the code. ChatGPT helped with that. Let's see if Bard can, as well. In the case of ChatGPT, I wrote my questions like a conversation. So let's do the same with Bard.
Describe ten different open source AI libraries (and the languages they work with) that I can use to generate a summary of the main core contents of any web page, ignoring any ads or embedded materials.
The results were good. Bard gave me ten different libraries. Some were different from those recommended by ChatGPT, but they were all valid libraries. Let's ask the next question in the original chain.
Are any of these free?
Bard responded with, "Yes, all of the libraries I listed are free and open source. You can find them on GitHub and other open source repositories." I think that's mostly correct. The only point I'm not sure about is that Bard recommended the GitHub-hosted Bard library, and since Bard is still in beta, we really won't know for a while whether it can be baked into code without any fees.
Which of these libraries have no usage limits and don't require any additional payment or licensing?
Now, this is interesting. Bard's answer was a bit more nuanced than ChatGPT's. Specifically, Bard replied:
All of the libraries I listed are free and open source, and they do not have any usage limits or require any additional payment or licensing. However, some of them may have additional restrictions, such as requiring you to cite the source or not using them for commercial purposes. It is important to read the license of each library before using it.
ChatGPT didn't mention licensing or use restrictions. So, I'll ask Bard a question I didn't ask ChatGPT:
Which of these libraries are not restricted against commercial use?
Bard replied with eight of the original ten libraries. Okay, so let's get back to the prompts I also used with ChatGPT.
Of those 8 libraries, can I use any with PHP?
It identified four: Sumy (which ChatGPT also identified), Bard itself, TextRank, and Latent Semantic Analysis (LSA). I next asked ChatGPT to compare the libraries it found, so I'll ask Bard to do the same thing with the libraries it recommended. This is what I originally asked ChatGPT: "What's the difference between Sumy, Gensim, and NLTK?" And here's my question to Bard:
What's the difference between Sumy, Bard, TextRank, and Latent Semantic Analysis (LSA)?
It produced a very solid answer. Here's what it said:
Sumy, Bard, TextRank, and Latent Semantic Analysis (LSA) are all algorithms that can be used to summarize text documents. However, they differ in how they work and what their strengths and weaknesses are.
Sumy is a Python library that can summarize text files and HTML pages. It uses a variety of algorithms, including Luhn, LSA, Edmundson heuristic method, LexRank, and TextRank. Sumy is a good choice for summarizing large amounts of text, as it is very efficient. However, it can sometimes generate summaries that are not very accurate.
Bard is a large language model from Google AI that can be used for a variety of tasks, including text summarization. It can be used to summarize text files, HTML pages, and even code. Bard is a good choice for summarizing text documents that are difficult to summarize with other algorithms, as it can understand the meaning of the text and generate summaries that are more accurate. However, it can sometimes generate summaries that are too long or repetitive.
TextRank is a machine learning algorithm for generating summaries of text documents. It is based on the idea that the importance of a sentence in a document is determined by the number of other important sentences that link to it. TextRank is a good choice for summarizing text documents that contain a lot of information, as it can identify the most important sentences and generate summaries that are concise and informative. However, it can sometimes generate summaries that are not very accurate.
Latent Semantic Analysis (LSA) is a statistical method for extracting the latent semantic structure of a corpus of text. It can be used to summarize text documents by identifying the most important topics. LSA is a good choice for summarizing text documents that contain a lot of information, as it can identify the most important topics and generate summaries that are concise and informative. However, it can sometimes generate summaries that are not very accurate.
Ultimately, the best algorithm for summarizing a text document will depend on the specific document and the desired outcome.
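Since TextRank keeps coming up in these answers, here's a minimal sketch of its core idea -- my own toy illustration, not Bard's output and not any library's actual implementation. Sentences are graph nodes, their cosine similarity forms weighted edges, and a PageRank-style iteration scores each sentence by how strongly similar sentences "vote" for it:

```python
import re
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def textrank(sentences, d=0.85, iters=30):
    """Rank sentences with a PageRank-style walk over their similarity graph."""
    vecs = [Counter(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    n = len(sentences)
    sim = [[cosine(vecs[i], vecs[j]) if i != j else 0.0 for j in range(n)]
           for i in range(n)]
    scores = [1.0] * n
    for _ in range(iters):
        # Each sentence receives score from similar sentences,
        # weighted by edge strength and damped by d.
        scores = [
            (1 - d) + d * sum(
                sim[j][i] * scores[j] / s_j
                for j in range(n)
                if (s_j := sum(sim[j])) > 0 and sim[j][i] > 0
            )
            for i in range(n)
        ]
    return scores

sents = [
    "The cat sat on the mat.",
    "The cat chased the mouse.",
    "Stock prices rose sharply today.",
]
scores = textrank(sents)
best = sents[scores.index(max(scores))]
```

The two cat sentences reinforce each other while the off-topic stock sentence gets no votes, which is exactly the behavior that makes TextRank good at surfacing a document's central sentences.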
In my original ChatGPT test, I asked for some additional clarification, so let's do the same with Bard.
If I want to create summaries of web page news articles, which library would work better?
Bard recommended Bard, where ChatGPT recommended Sumy. Here's Bard's answer:
Based on your requirements, I suggest using Bard. It can summarize text documents that are difficult to summarize with other algorithms, as it can understand the meaning of the text and generate summaries that are more accurate. Additionally, it can summarize text documents that contain a lot of information, which is a common characteristic for web page news articles.
It does seem to understand the spirit of the questions I'm asking. My next question in the ChatGPT test was about using Sumy with PHP. Since Bard recommended itself, I'll ask the same question for Bard.
Can you explain how to use Bard from PHP?
Here's the answer. It's pretty solid, and even includes some code snippets to see how it might work.
Obviously, I didn't feed Bard the prompt to write the actual summarize_article routine, because we've already established it can't code.
But contrary even to Google's own statement, it's clear that Bard can definitely assist with coding, at least in terms of locating resources and helping to sift through data to find relevant information in those resources.
Some of Bard's answers were more nuanced than ChatGPT's: it pointed out licensing issues in response to one question and, in another, noted the disadvantages as well as the advantages of the various libraries. That's a win for Bard.
I'm planning to dive more into how Bard can help solve technical challenges. ChatGPT did quite well with those challenges, so I'm curious about how Bard does. Stay tuned.
That said, from both Sabrina's tests and Google's own FAQ, I expected Bard to crash and burn with anything coding related, and that's not the case. Based on my initial testing, Bard can function as a useful research aid for coding issues, and possibly help save some time.
Now, to be fair, I'll definitely choose ChatGPT over Bard if I need coding help. But Bard isn't completely without game, and I can see it coming in really handy as a source of a second opinion for many different types of research. After all, I've caught ChatGPT in the act of just making stuff up rather than admitting it doesn't know the answer to something. Bard demonstrated real usefulness with the coding examples above, adding value and nuance that ChatGPT missed, even though Bard isn't able to actually write code… yet.