Last quarter I had the opportunity to participate in two of Stanford’s newest education initiatives, db-class and ai-class. These were two of the three free courses offered to the general public as well as Stanford students, where the public would be given the opportunity to learn the exact same material and earn a “certificate of completion” at the end. The public offerings were strictly online, with video lectures and online assignments evaluating each student’s progress, while Stanford students benefited from in-class instruction as well as additional material (and a grade!) that would not be possible for the rest of the public. As far as hard numbers go, the statistics are astounding: for db-class, an introduction to databases, 90,000 people signed up, 25,000 did some amount of work, and 6,500 did well enough to earn a certificate. AI-class had 160,000 sign up and apparently “graduated” 23,000, which is simply amazing.
Given the recent focus on technology-based education, I thought I’d share my thoughts as a Stanford student who also signed up for the public versions: what went well, and what could be improved.
AI-class, the introduction to Artificial Intelligence (offered as CS221 at Stanford), was taught by two luminaries in the field of AI, Peter Norvig and Sebastian Thrun. Peter can be quickly identified by his colorful t-shirts. Both work at Google: Peter as Director of Research, Sebastian as a Fellow (who gained notoriety with the Google self-driving car). DB-class (CS145) was taught by Jennifer Widom, who is also very much a leader in the field of databases. The support team for each in-class segment was identical in size: both had 6 TAs ranging from co-term students to graduates.
The hallmark of each class was the video content that provided instruction as the course proceeded. AI-class took the YouTube route, segmenting lectures into short snippets usually no longer than 10 minutes each. The videos were posted as the class proceeded, though the online interface was slow to update – I often had to resort to the YouTube channel to review video of content that had been covered in class. This was in addition to the full-length recordings Stanford makes internally for SCPD students, but the snippets were by far more useful. Transcription was outsourced to dotSUB, so the captions were largely free of errors.
Post-video quizzes were used to assess one’s understanding of the material – for simple instruction, these quizzes had answers immediately after pressing the submit button (and were ungraded) while homework questions were graded all at once after a deadline.
Jennifer kept all the videos within the platform, and as such was able to leverage a few nicer features, like speed control and small quizzes embedded within the video stream. Videos here were also typically fairly short, though the occasional 30-minute video was a bit daunting at times. As far as I was concerned there was always at least a week or two of material available to view online ahead of the pace of the class, so I never had to worry about not finding a video.
Homework assignments in AI-class were offered as fill-in-the-blank responses to videos that had questions at the end, just like the quizzes. The deadlines for two homework assignments were pushed back because of scheduling issues (sometimes the assignments weren’t posted in time, for example), making the whole experience a rather frenetic one. Sometimes the questions were ambiguous and led to confusion, though to be fair the instructors made a lot of effort to reduce confusion – sometimes too much.
The DB-class had one fantastic feature that made working with databases a lot more fun: a “Command Workbench” that let you test SQLite queries, triggers, or transactions within the browser. Jennifer dangled Jamba Juice cards for anyone who could find bugs, and remarkably there was, I think, just one documented case of anyone earning a card, because there weren’t any issues to uncover. The sheer volume of the homework assignments (and the fact that most of them had some mechanism to check whether your answers were correct) made the whole exercise a little too pedantic at times: I’d be like “all right, all right! I understand transactions already!”
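For readers who never saw the workbench, the same three constructs – queries, triggers, and transactions – can be tried offline with Python’s built-in `sqlite3` module. This is just a minimal sketch with a made-up schema, not the class’s actual datasets or the workbench itself:

```python
import sqlite3

# In-memory database standing in for the browser-based workbench.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy schema (hypothetical -- the class used its own datasets).
cur.executescript("""
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, gpa REAL);
CREATE TABLE audit (student_id INTEGER, old_gpa REAL, new_gpa REAL);

-- A trigger, one of the constructs the workbench let you test:
-- log every GPA change into the audit table.
CREATE TRIGGER log_gpa AFTER UPDATE OF gpa ON student
BEGIN
    INSERT INTO audit VALUES (OLD.id, OLD.gpa, NEW.gpa);
END;
""")

cur.execute("INSERT INTO student VALUES (1, 'Alice', 3.5)")

# A transaction: the 'with' block commits on success, rolls back on error.
with conn:
    conn.execute("UPDATE student SET gpa = 3.9 WHERE id = 1")

# A plain query: the trigger should have logged the change.
print(cur.execute("SELECT old_gpa, new_gpa FROM audit").fetchall())
# -> [(3.5, 3.9)]
```

The same experiment works in the `sqlite3` command-line shell; the Python wrapper just makes it easy to script.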
To be brutally honest, the only good thing about taking AI-class in person was that the professors were physically available to answer questions – and even then the opportunity rarely presented itself, as both professors have busy lives outside of Stanford. I felt the same problem undermined CS276, Information Retrieval: in that class one professor worked at Yahoo, the other at Google, and both were very hard to get a hold of. Attendance in CS221 dwindled significantly over the course of the quarter, which was very disappointing to see. As an added twist to the new offering, students in the class were given a Pac-Man-based project (which the students in the online AI-class did not do). Unfortunately this too was fraught with problems, starting with faulty starter code and outdated instructions.
Jennifer did try to address that issue by removing in-class instruction altogether and using the time instead for industry speakers or workshops on the various topics we were covering. Additionally, attendance in some classes was optional, and attendance in the required ones was checked using a rudimentary quiz system where students answered one very simple question about the previous required class. We had engineers come from Facebook, Twitter, and Walmart Labs, as well as from some of the departments on campus. Some of the talks were informative, while others came across as sales pitches for hiring talent. The talks toward the end of the quarter became more and more relevant, including ones on Hadoop and HBase, which I found interesting. Jennifer also maintained office hours on campus, which was especially nice toward the finals period.
The exams for DB-class were the traditional sit-in type, with 6 sides of 8″x11″ paper allowed for any notes we wanted to bring in. I found both exams fairly difficult; I had assumed my online performance would be a good indicator of my paper-based performance, which clearly was not the case. I noted that the public offering had a 20-question quiz-style final exam, which was a really neat (if simplified) implementation – simplified in that the Stanford course had a full 2 hours to work with.
For the AI-class the midterm was expected to be an online (homework-style) exam, but ended up being a PDF that we were given 24 hours to complete. I kid you not when I share the following question from the midterm:
The answers were entered into a text file and emailed back. The final was even more disorganized, though this time it was very much in the style of the homework. We were effectively given about 54 hours to complete it: it was released a day early, on Thursday, December 15, due 1 PM Saturday, and then the deadline was pushed back an extra day. I believe the online class was also allotted extra time somewhat arbitrarily. Both exams leaned heavily on concepts of probability, and I felt there was plenty of opportunity to explore more interesting aspects of AI, like POMDPs (not a single question on them). Both the public and the Stanford students were given the same final exam in form and content.
There were some very interesting differences that emerged from both incarnations of each class, as I am sure the teaching staff can attest to. From a student’s perspective, here are some of the things I noted:
a) The online AI-class community benefited from aiqus.com, a StackOverflow-style forum that let people vote on questions. The students certainly appeared to take significant advantage of it, though the finals period was interesting: students would ask clarification questions that were immediately flagged as non-answerable simply because they related to the final. Stanford’s AI-class Q&A forum, by contrast, was poorly maintained and certainly not a resource I would have turned to in case of need.
b) The online DB-class also made heavy use of its forum, unlike the Stanford equivalent, which was identical in form. Indeed, one of its more prolific answerers, someone by the name of Amy Cunningham, even received a mock marriage proposal because she was so good at quickly answering questions from students in the public offering. The TAs did a good job for the Stanford students as well.
c) A neat feature both (online) classes implemented was a sort of virtual office hours – Jennifer called them “fireside chats” – and in both the professors addressed common questions or concepts that students would send in. Ironically I found that one of Jennifer’s fireside chats directly addressed a question that came up on the Stanford midterm.
Sebastian and Peter tried using Google Hangouts for one of their earlier office hours – it seems that this wasn’t as successful as it could have been:
Despite the shortcomings that made AI-class significantly less inspiring than it could have been (given, perhaps, the high expectations), both classes marked a unique step in the right direction. Both courses were the first of their kind to actively engage students and professors in the learning process, and both had basic and advanced tracks that let students cover material at their own pace. Jennifer made a strong effort to ensure that the course made its own mark outside the online instruction, and ultimately I think this kind of knowledge transmission is effective: learn the basics at your own pace, using video and online material, and learn the interesting, real-world applications in class. I think everyone would agree that hearing Sebastian share stories and video about Stanley was a nice touch, for example. What was deeply disappointing about the AI-class was that we as Stanford students seemed stuck in the basic track – in an effort to make the education globally accessible, the course became less interesting and less engaging for those who had come to Stanford to learn in the same room as the people who had transformed the AI landscape.
Truthfully, I think a public-offering course model like this cheapens the learning experience for students at Stanford. Make no mistake, the efforts of these professors and their teaching teams are monumental and should be applauded, but the scale of and demand for these courses make it almost impossible to provide a stimulating experience for everyone. Is it worth $8,000 in tuition for me to take AI-class and DB-class in person? I really don’t think so. Is it worth it at $0? Absolutely. How can you say no? Head on over to the ones I’m looking forward to starting in February.