5 Comments
Alia el banna

As a uni professor, what I struggle with the most now is this question you pose: “Are our kids' teachers adapting fast enough to the ubiquity of this type of technology, so that grades reflect what kids have learned and not their proficiency in using this type of tools?”

We need to know what exactly we are assessing! We can go about this in many ways, I am sure: maybe change assessments to something where AI cannot be used (an in-class presentation, for example, where we can discuss the arguments presented to better gauge learning), or learn ourselves how to detect AI-generated content!

Noelia Amoedo

Agreed! Maybe this is wishful thinking, but this could turn into an opportunity to rethink the whole system, and really focus on how to ensure students actually understand, and can think critically. Maybe the existence of these tools gives us the perfect excuse to make changes that we should make anyway?

Alia el banna

I agree. Higher education has to adapt to changes in the learning environment. And the universities/programs that accept this need are definitely leading the way.

JJ Mata

My opening statement is already going to firmly plant me on one side (the unpopular one, I suspect), as it is my strong belief that *all* content is human-generated content. Machines just repeat patterns humans have created (by looking at statistical correlations), and the patterns they copy are ultimately human in origin. Even now that "synthetic data" is a thing, all synthetic tokens originated in human-generated data at some point in the layer cake of training sets.

They do mix up content into surprising, some say "emergent", new patterns. But even those come from correlations that were at one point human and simply got reflected into a matrix of statistical weights.

The core question for me is: are we willing to place a higher (market) value on the processing done by humans? I surely would hope so, as long as said output improves upon what can be generated by a machine at lower marginal cost. Putting a higher value on "direct from human" content also means the bar for quality content has all of a sudden been raised, because generating derivative "art" is no longer as complicated as it used to be. Anybody can aspire to make "art", just like anybody thinks themselves the next Ansel Adams when it is in fact machine learning fixing the horrible exposure of their landscape shots.

What's truly hard to come to grips with is that we are, after all, just stochastic parrots in one way or another. What makes us special, as biological organisms? What is the so-called soul? Let me go ask Claude ... BRB!

Noelia Amoedo

Thanks so much for engaging in the conversation, JJ. I’d like to think there are no “sides” but only different perspectives, all enriching :-) You bring up some fascinating points!

I agree that, as long as humans have generated the training data, whatever an AI spits out may still be considered human in its origin. I will be careful to talk about “direct from humans” content instead of human-generated content from now on.

Are humans just stochastic parrots? Do we just talk and write without truly understanding what we are saying? I guess it depends on what “truly understanding” means. We certainly behave like stochastic parrots from time to time, simply repeating what we hear (maybe too often). But we are capable of saying things we understand, and that understanding is the result of everything we have lived through and everything we have heard and read.

Now, what we understand is only a very small percentage of what is going on in our brains (only something like 5% of our brain activity is conscious), and the more we know, the more we realize there is even more that we don’t know. So yes, there may be nothing that we “truly” understand, and we may be stochastic parrots in that sense.

All that being said, and knowing as little as I know, I feel that being able to identify what comes directly from humans vs. machines may put us in a better position to answer your critical question, "what makes us humans special?", while not having a clear downside. So I would not let that go while we still have time.
