I don't really know how likely this is to work in practice, but it's an attack vector that had never even occurred to me before and that's really interesting. Again, I'm not actually sure what the defense against such an attack is. I mostly expect it might fail because I don't know how often you'll actually get the exact same fake library recommended. But does it need to work all the time? In a world where tons of developers are using the same LLM to generate code snippets, maybe it only needs to work 1/10000 times to be worth it.
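To make the mechanics concrete, here's a minimal, hypothetical sketch of one side of this: given package names pulled from LLM-generated snippets, flag any that aren't actually registered on the index. Those unregistered names are exactly what an attacker running this scheme would want to claim first (and what a cautious developer would want to check before `pip install`-ing). The `KNOWN_PACKAGES` set is a stand-in for a real registry lookup against PyPI, which I'm not doing here:

```python
# Hypothetical sketch: find package names in LLM-suggested code that no
# one has registered yet. An attacker squats those names; a defender
# treats them as red flags before installing anything.

# Stand-in for a real registry query (e.g. hitting pypi.org).
# Contents are illustrative, not a real snapshot of PyPI.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask"}

def unregistered_names(candidates, registry=KNOWN_PACKAGES):
    """Return the candidate package names absent from the registry."""
    return sorted(name for name in candidates if name not in registry)

# Names scraped from (imaginary) LLM-generated snippets:
llm_suggested = ["requests", "fastjsonutils", "numpy", "easy_auth_helper"]

print(unregistered_names(llm_suggested))
# → ['easy_auth_helper', 'fastjsonutils']
```

In a real version the registry check would be a query per name against the package index rather than a hardcoded set, but the core of the attack (and the defense) really is this simple: hallucinated names are, by definition, names nobody owns yet.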
Finally, in the linkdump portion of this piece: https://lcamtuf.substack.com/p/llms-are-better-than-you-think-at
This is a short but interesting little article about how "reinforcement learning with human feedback", the secret sauce that makes chatGPT, Alpaca, Vicuna, and the like so good at taking in our prompts and almost magically doing the right thing with them, is easier to game than it may seem at first.
Okay, let's start talking about that piece from the [[https://www2.ed.gov/documents/ai-report/ai-report.pdf][Department of Education]] on AI and education. Now, I swear I'm mostly going to have positive things to say about this report, /but/ I am going to start off with a nitpick based on this paragraph:
#+begin_quote
AI can be defined as “automation based on associations.” When computers automate reasoning based on associations in data (or associations deduced from expert knowledge), two shifts fundamental to AI occur and shift computing beyond conventional edtech: (1) from capturing data to detecting patterns in data and (2) from providing access to instructional resources to automating decisions about instruction and other educational processes. Detecting patterns and automating decisions are leaps in the level of responsibilities that can be delegated to a computer system. The process of developing an AI system may lead to bias in how patterns are detected and unfairness in how decisions are automated. Thus, educational systems must govern their use of AI systems. This report describes opportunities for using AI to improve education, recognizes challenges that will arise, and develops recommendations to guide further policy development.
#+end_quote
When they're talking about detecting patterns and automating reasoning based on associations, they're talking specifically about /machine learning/, not AI broadly. Admittedly, most AI these days *is* machine learning, but I think it's still important to note that AI is sort of just the general part of computer science that deals with creating adaptive algorithms that handle situations where exact solutions aren't known. In other words, it's about creating programs that solve problems, rather than programmers solving the problem and implementing the solution to /that instance/ in code.
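To illustrate that broader sense of AI, here's a toy example with no data and no learned associations at all: a hill-climbing search where the programmer encodes /how to look for/ a solution rather than the solution itself. Everything in it (the target word, the scoring function) is invented for illustration:

```python
import random

# Toy "adaptive algorithm" in the broad-AI sense: we write down a way to
# SEARCH for an answer, not the answer. No training data, no learned
# associations -- so it's AI by the older definition, but not ML.

TARGET = "edtech"  # illustrative goal the search has to discover
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def score(guess):
    """How many positions of the guess already match the target."""
    return sum(a == b for a, b in zip(guess, TARGET))

def hill_climb(seed=0):
    rng = random.Random(seed)
    guess = [rng.choice(ALPHABET) for _ in TARGET]
    while score(guess) < len(TARGET):
        # Mutate one random position; keep the change if it's no worse.
        i = rng.randrange(len(TARGET))
        candidate = guess.copy()
        candidate[i] = rng.choice(ALPHABET)
        if score(candidate) >= score(guess):
            guess = candidate
    return "".join(guess)

print(hill_climb())  # the search finds "edtech" on its own
```

Of course this particular example is trivial, but the shape of it is the point: the same search-not-solve structure is what you get in planners, constraint solvers, and game-playing programs, none of which involve detecting patterns in data.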
So the reason I like this report is that I think they start off on the right track with stuff like this:
#+begin_quote
Understanding that AI increases automation and allows machines to do some tasks that only people did in the past leads us to a pair of bold, overarching questions:

1. What is our collective vision of a desirable and achievable educational system that leverages automation to advance learning while protecting and centering human agency?
2. How and on what timeline will we be ready with necessary guidelines and guardrails, as well as convincing evidence of positive impacts, so that constituents can ethically and equitably implement this vision widely?
#+end_quote