From: Daniela Tafani
Date:
To: 380°, aisa.circuli@inventati.org
Subject: Re: [Aisa.circuli] [SPAM]Re: "Self driving labs" e SALAMI "peer" reviewers
Good morning.

On 06/07/2023 17:27, 380° via Aisa.circuli wrote:
>> Once the literature review is complete, scientists form a hypothesis to be tested. LLMs at their core work by predicting the next word in a sentence, building up to entire sentences and paragraphs. This technique makes LLMs uniquely suited to scaled problems intrinsic to science’s hierarchical structure and could enable them to predict the next big discovery in physics or biology.
> Aaaaaaaaaaaaaaahhhhhhhhhhhhhhhhhhhhh
>
> Am I the only one on this list who feels like screaming when faced
> with such nonsense?!?


It would be nice to have the time to scream about the stories on
self-driving labs, but there is no time, because the contest to see who
can make the most outlandish claim goes on, and so a minute later the
automated researcher arrives: a SALAMI researcher that will do nothing
less than align the values of the other SALAMIs with human values
(which is why the brand-new discipline is called, as in a Marvel film,
"superalignment"):

https://openai.com/blog/introducing-superalignment

/"How do we ensure AI systems much smarter than humans follow human
intent?"/

/"/Our goal is to build a roughly human-level automated alignment
researcher <https://openai.com/blog/our-approach-to-alignment-research>.
We can then use vast amounts of compute to scale our efforts, and
iteratively align superintelligence.

To align the first automated alignment researcher, we will need to 1)
develop a scalable training method, 2) validate the resulting model, and
3) stress test our entire alignment pipeline:

 1. To provide a training signal on tasks that are difficult for humans
    to evaluate, we can leverage AI systems to assist evaluation of
    other AI systems <https://openai.com/research/critiques> (scalable
    oversight). In addition, we want to understand and control how our
    models generalize our oversight to tasks we can’t
    supervise (generalization).
 2. To validate the alignment of our systems, we automate search for
    problematic behavior
    <https://www.deepmind.com/blog/red-teaming-language-models-with-language-models> (robustness)
    and problematic internals (automated interpretability
    <https://openai.com/research/language-models-can-explain-neurons-in-language-models>).
 3. Finally, we can test our entire pipeline by deliberately training
    misaligned models, and confirming that our techniques detect the
    worst kinds of misalignments (adversarial testing)."
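
For what it's worth, the "scalable oversight" step, stripped of the
prose, amounts to letting one model grade another. A minimal sketch of
the idea in Python, purely illustrative; generate() and critique() are
hypothetical stubs, not anyone's actual API:

# Illustrative only: the "scalable oversight" step reduced to its skeleton,
# i.e. one model's answers are scored by another model instead of a human.
# generate() and critique() are hypothetical stubs, not a real API.

def generate(prompt):
    """Stand-in for the supervised model (a real system would call an LLM)."""
    return f"(model answer to: {prompt!r})"

def critique(prompt, answer):
    """Stand-in for the evaluator model: returns a score between 0 and 1.
    In the quoted scheme the critic is itself another LLM judging the answer."""
    return 0.5  # placeholder verdict

def oversight_step(prompt, threshold=0.8):
    """Generate an answer and let the automated critic decide if it passes."""
    answer = generate(prompt)
    return answer, critique(prompt, answer) >= threshold

if __name__ == "__main__":
    answer, accepted = oversight_step("How do we keep the model aligned?")
    print(answer, "-> accepted" if accepted else "-> flagged for review")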



Best regards,
Daniela