
Riskable

Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 43 Posts
  • 2.23K Comments
Joined 3 years ago
Cake day: June 23rd, 2023



  • If only they would actually spend it on things that benefitted the economy instead of:

    • Buying land/houses
    • Buying successful businesses in order to loot them (typical Private Equity moves)
    • Lobbying the government to cut their taxes and reduce social safety nets

    I firmly believe that a big reason why wealth is consolidating so quickly is because there’s not much for the rich to spend their money on these days. It used to cost a fortune to care for a great big mansion and the surrounding grounds (and you’d have a local economy built on that). These days, it’s not even a rounding error in their monthly income from interest and dividends and you don’t even need to hire very many people.

    There used to be actual luxury services the rich needed to pay for if they wanted to appear better off than the riffraff. Now the common man has access to all those luxuries and more.

    Once you’ve got a couple private jets and a yacht (that you never spend any time on), what the fuck do you need all that money for‽ Even they don’t know what to do with it!

    Larry Ellison (the original tech villain billionaire) bought a huge private island in Hawaii. He’s never there. It just… sits there. With a small staff and places for his yachts. It’s like he wants to go down in history as one of the biggest, greediest scumbags of all time.




  • Riskable to politics@lemmy.world · Who Controls AI Exactly? · English · 2 days ago

    No, a .safetensors file is not a database. You can’t query a .safetensors file and there’s nothing like ACID compliance (it’s read-only).
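
    For concreteness, a .safetensors file is just an 8-byte little-endian header length, a JSON header, then raw tensor bytes. Here's a sketch that builds and reads a tiny blob by hand (the tensor name and values are made up for illustration; real code should use the safetensors library):

```python
import json
import struct

# Build a minimal .safetensors-style blob in memory: 8-byte little-endian
# header length, a JSON header, then raw float32 bytes. (Tensor name and
# values are made up for illustration.)
header = {"layer.weight": {"dtype": "F32", "shape": [2, 2],
                           "data_offsets": [0, 16]}}
header_bytes = json.dumps(header).encode("utf-8")
data = struct.pack("<4f", 0.01645, 0.67235, -0.5, 1.0)
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + data

# "Reading" it back is just slicing bytes -- there's no query engine,
# no indexes, no transactions. A metadata header, then flat float arrays.
(n,) = struct.unpack("<Q", blob[:8])
meta = json.loads(blob[8:8 + n])
start, end = meta["layer.weight"]["data_offsets"]
weights = struct.unpack("<4f", blob[8 + n + start:8 + n + end])
print(meta["layer.weight"]["shape"], weights)
```

    Notice there's nothing database-like anywhere in that layout: one JSON header describing shapes and byte offsets, then contiguous floats.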

    Imagine a JSON file containing nothing but keys and values, where both the keys and the values are floating point numbers. It’s basically gibberish until you run it through an inference process, feeding random numbers through it over and over again and whittling them down until you get a result that matches the prompt to a specified degree.
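
    That "whittling down" loop can be sketched in miniature. The predict_noise function below is a made-up stand-in for a real denoising network (a real one is a huge neural net conditioned on the prompt); the point is only the shape of the loop: start from pure random numbers, repeatedly subtract predicted noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained denoiser. In a real diffusion model
# this is a neural network; here the "noise" it predicts is just the
# distance from a fixed target, so the loop visibly converges.
target = np.array([0.2, -0.4, 0.7])

def predict_noise(latent):
    return latent - target

latent = rng.standard_normal(3)        # start from pure random numbers
for _ in range(50):                    # whittle it down, step by step
    latent = latent - 0.1 * predict_noise(latent)

print(np.round(latent, 3))             # ends up close to the target
```

    Real samplers vary the step size over a noise schedule instead of using a fixed 0.1, but the iterate-and-refine structure is the same.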

    How do the “turbo” models work to get a great result after one step? I have no idea. That’s like black magic to me haha.






  • Riskable to politics@lemmy.world · Who Controls AI Exactly? · English · 2 days ago

    > Or, with AI image gen, it knows that when someone asks it for an image of a hand holding a pencil, it looks at all the artwork in its training database and says, “this collection of pixels is probably what they want”.

    This is incorrect. Generative image models don’t contain databases of artwork. If they did, they would be the most amazing fucking compression technology, ever.

    As an example model, FLUX.1-dev is 23.8 GB:

    https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

    It’s a general-use model that can generate basically anything you want. It’s not perfect and it’s not the latest & greatest AI image generation model, but it’s a great example because anyone can download it and run it locally on their own PC (and get vastly superior results to those of ChatGPT’s DALL-E model).

    If you examine the data inside the model, you’ll see a bunch of metadata headers and then an enormous array of arrays of floating point values. Stuff like [0.01645, 0.67235, ...]. That is what a generative image AI model uses to make images. There’s no database to speak of.

    When training an image model, you need to download millions upon millions of public images from the Internet and run them through their paces against an actual database like ImageNet. ImageNet contains metadata about millions of images, such as their URLs, bounding boxes around parts of each image, and keywords associated with those bounding boxes.

    The training is mostly a linear process, so the images never really get loaded into a database; they just get read, along with their metadata, into a GPU, which performs some Machine Learning stuff to generate arrays of floating point values. Those values ultimately end up in the model file.

    It’s actually a lot more complicated than that (there are pretraining steps, classifiers, verification/safety stuff, and more), but that’s the gist of it.

    I see soooo many people who think image AI generation is literally pulling pixels out of existing images but that’s not how it works at all. It’s not even remotely how it works.

    When an image model is being trained, any given image might modify one of those floating point values by like ±0.01. That’s it. That’s all it does when it trains on a specific image.
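
    A toy version of that update, under the simplifying assumption of plain gradient descent, with three hypothetical weights standing in for the billions in a real checkpoint (all names and numbers are made up):

```python
import numpy as np

# A microscopic "model": three weights standing in for the billions of
# floats in a real checkpoint. (Illustrative values only.)
weights = np.array([0.01645, 0.67235, -0.12000])

def loss_gradient(weights, image_features):
    # Hypothetical gradient; in a real trainer this comes from
    # backpropagating a loss through the whole network.
    return weights - image_features

image_features = np.array([0.02, 0.66, -0.11])  # stand-in for one image
learning_rate = 0.5
step = -learning_rate * loss_gradient(weights, image_features)
weights = weights + step

# Each training image only nudges each weight by a tiny amount;
# no pixels from the image are stored anywhere.
print(np.round(step, 5))
```

    The takeaway: one image contributes a small delta to the weights, not a copy of itself.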

    I often rant about where this process goes wrong and how it can result in images that look way too much like specific images in the training data, but that’s a flaw, not a feature. It’s something that every image model has to deal with, and it will improve over time.

    At the heart of every AI image generation run is a random number generator. Sometimes you’ll get something similar to an original work, especially if you generate thousands and thousands of images. That doesn’t mean the model itself was engineered to do that. Also: a lot of that kind of problem happens in the inference step, but that’s a really complicated topic…