condensedcrab 2 hours ago

I think the main bottleneck for research in experimental fields in academia is still the throughput of running experiments.

Most grad students and professors can cook up reasonable hypotheses, but someone still has to run all of the experiments. The phase space is generally too wide to explore every research direction, so human intuition is still needed to figure out which ones will pan out, so that the students are fed and the next grant comes in.

nritchie an hour ago

As a scientist, I like to believe that my greatest contribution is my fresh ideas. Maybe AI will be able to do this better than I can. If so, I'm not sure that I really want to be a scientist anymore. TBH, I'm in science because I enjoy the process of thinking, coming up with ideas and testing them. Eliminate any part and it just becomes another tedious job.

  • jszymborski 19 minutes ago

    As Sir Martyn Poliakoff likes to say (greatly paraphrasing), your mind is like a vegetable soup, and stirring it around surfaces all sorts of interesting bits.

    I see this as a great way of stirring things up! Like, once I asked ChatGPT about some ideas I had, and it hallucinated something that gave me a great idea.

  • spking an hour ago

    AI can "play" the drums better and more precisely than me but I still enjoy the act of playing the drums. AI can or will probably drive a Formula 1 car better than Max Verstappen but he will still enjoy being a driver. I don't see why we can't coexist with machine counterparts.

    • mynameajeff 29 minutes ago

      From all the attempts I've seen in the past few years, we haven't been able to get AI to successfully complete a lap of wheel-to-wheel racing at full speed on a real track[1], so I'm not sure they're exactly competing for a seat as a pay driver at Haas, let alone racing for Red Bull, anytime soon.

      [1] https://youtu.be/TPzBH-7ckO0

hnuser123456 an hour ago

We need continuous self learning and improvement at the level of post-training weight updates.

My endeavors in this area show that it is very difficult for an LLM to build up and integrate novel knowledge through conversation. Any new development that isn't in context is completely forgotten unless it is continuously kept in context, along with enough reassuring text that the novel knowledge is accurate when it goes against the LLM's training.

Is there any research into 1B-2B sized models that can be continuously finetuned on a single RTX 3090?

  • j_bum an hour ago

    Excellent point, and I agree with you.

    I’m curious whether RAG could solve part of this problem?
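
    RAG sidesteps weight updates entirely: novel facts live in an external store and get injected into the context at query time. A toy sketch of the retrieval step (bag-of-words cosine similarity standing in for learned embeddings; the documents and query here are made up for illustration):

    ```python
    import math
    from collections import Counter

    # A toy store of "novel knowledge" the model never saw in training.
    documents = [
        "the lab's new buffer recipe uses 50 mM HEPES at pH 7.4",
        "run 17 showed the catalyst degrades above 200 C",
        "the 3090 node is reserved for finetuning jobs on weekends",
    ]

    def bow(text):
        # Bag-of-words term counts; real systems use learned embeddings.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query, docs, k=1):
        q = bow(query)
        return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

    query = "at what temperature does the catalyst degrade?"
    context = retrieve(query, documents)[0]
    # The retrieved passage is prepended to the prompt, so the fact stays
    # "in context" without any weight update.
    prompt = f"Context: {context}\n\nQuestion: {query}"
    print(prompt)
    ```

    It addresses the "kept in context" half of the problem, though not the part where the LLM distrusts retrieved facts that contradict its training.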