Final Projects From My Spring MOOCs

Spring has come and gone, and with it another wave of MOOC experiences. This time, I also had the chance to submit a number of final projects for peer assessment. I am still skeptical about the reliability of peer-assessed assignments, and I even enlisted my teammates from Leading Strategic Innovation in Organizations to see how much our scores would vary when we submitted the same assignments.

For example, we all submitted the same video for our final project: Innovations Final (v0). However, I was the only one of three team members who received the maximum possible score for the submission, as well as positive feedback:

Since the grades are relatively arbitrary and depend significantly on luck, I won’t share my results; suffice it to say I received “passing marks” on the four peer-assessed final projects I did submit:

Overall, all four of these classes were well worth the time spent. This summer, I’ll hopefully add a few more projects to my library, including a Bitcoin Selfstarter if I can make it through all of Startup Engineering. Fingers crossed.



Fairness of MOOC Peer Assessments

A few weeks ago, I received the results of my first Peer Assessment in the Coursera course Data Analysis. My previous courses had only computer-graded quizzes, so I was curious to see how it would go. Like many others, I am fairly skeptical about the fairness and accuracy of these Peer Assessments. My reasons for skepticism:

  1. What prevents a student from sabotaging others’ grades?
  2. Conversely, what prevents a student from giving others full scores?
  3. What do your peers REALLY know about the subject?

I still don’t have any clear answers, but I do feel as though I have sabotaged others a bit myself. I received a grade of 82 out of 85, which also happens to be what I gave myself during the self-evaluation phase. But I was required to grade four peers, who received 58, 17, 57, and 53 from me, respectively. To confirm that those really were subpar submissions, I even graded an additional “optional” assignment, hoping to find that not all submissions were so bad. That last paper was much better; I gave it a 78.

More importantly, though, what do most of the students (myself included) really know about the topic? For example, some papers used more advanced data analysis techniques, such as k-fold cross validation and boosting, yet received worse grades, probably because a majority of the students don’t understand what these techniques are, whether they are better, or how to apply them properly.
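For readers unfamiliar with one of the techniques mentioned above: k-fold cross validation just means partitioning the data into k folds and holding each one out in turn as a test set. A minimal sketch in pure Python (the function name is my own):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal folds, then yield
    (train, test) index lists with each fold held out in turn."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# With 6 data points and 3 folds, every point lands in exactly one test set:
for train, test in k_fold_indices(6, 3):
    print(sorted(test), "held out; trained on", sorted(train))
```

The point graders often miss is that a model evaluated this way is being tested on data it never saw, so a slightly worse-looking score can be a more honest one.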

A slight confidence booster is that some professors do take notice of the discrepancies between normal grades and the Peer Assessments. In the course Developing Innovative Ideas for New Companies, the average quiz grade was 82% while the average Peer Assessment grade was 65%, so the professor raised everyone’s Peer Assessment grade by 17 percentage points. I’m not totally sold that this is the right way to adjust, but it is comforting to know that professors are aware of and involved in the process.
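The adjustment described above amounts to a simple additive curve. A minimal sketch (the function name and the cap at 100 are my assumptions; the professor's exact procedure wasn't published):

```python
def curve_peer_grade(peer_grade, quiz_avg=82.0, peer_avg=65.0):
    """Raise a peer-assessed grade by the gap between the class's quiz
    average and its peer-assessment average (82 - 65 = 17 points here),
    capping the result at 100."""
    adjustment = quiz_avg - peer_avg  # 17.0
    return min(peer_grade + adjustment, 100.0)

# A student at the peer-assessment average is lifted to the quiz average:
print(curve_peer_grade(65))  # 82.0
# High scorers hit the cap, so the curve compresses the top of the range:
print(curve_peer_grade(95))  # 100.0
```

One reason to be unsold on this: an additive shift assumes every peer grader was equally harsh, and the cap means strong submissions gain less than weak ones.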

For those who are interested, my four-page Data Analysis assignment, Lending Club Interest Rates are closely linked with FICO scores and Loan Length, is available. It reveals nothing new or interesting to the world, but it was good practice in building linear regressions, and it meets the minimum requirements of the assignment.
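The core of that kind of analysis is an ordinary least squares fit of interest rate against FICO score and loan length. Here is a sketch using synthetic stand-in data (the coefficients, ranges, and noise level below are invented for illustration, not taken from the actual Lending Club dataset):

```python
import numpy as np

# Hypothetical stand-in data: rate as a linear function of FICO and term.
rng = np.random.default_rng(0)
n = 200
fico = rng.uniform(640, 830, size=n)             # borrower FICO scores
length = rng.choice([36, 60], size=n)            # loan term in months
rate = 70 - 0.08 * fico + 0.05 * length + rng.normal(0, 0.5, n)

# Design matrix with an intercept column, fit by ordinary least squares.
X = np.column_stack([np.ones(n), fico, length])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
intercept, b_fico, b_length = coef
print(f"intercept={intercept:.2f}, fico={b_fico:.4f}, length={b_length:.4f}")
```

With enough data and modest noise, the fitted coefficients land close to the true values used to generate the data, which is the basic sanity check the assignment exercises.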

Here are some other Data Analysis papers and their scores, for reference/comparison: