Elon ******* Musk

Comments

  • Ruth wrote: »
    Don't know why he wants to go to Mars given how mercurial he is.

    And enjoys Saturnalia.
  • Arethosemyfeet Shipmate, Heaven Host
    Ruth wrote: »
    Don't know why he wants to go to Mars given how mercurial he is.

    Maybe it's the moons of Mars he's after?
  • Ruth wrote: »
    Don't know why he wants to go to Mars given how mercurial he is.

    good one.

  • Piglet All Saints Host, Circus Host
    ... If [Tesla] folds, he is in very deep trouble ...

    I rather doubt that; I should think that even without Tesla, he's probably got the sort of financial resources you and I could only dream of.
  • Hugal Shipmate
    Piglet wrote: »
    ... If [Tesla] folds, he is in very deep trouble ...

    I rather doubt that; I should think that even without Tesla, he's probably got the sort of financial resources you and I could only dream of.

    Maybe, but that amount of loss would hurt even him.
  • Arethosemyfeet Shipmate, Heaven Host
    Piglet wrote: »
    ... If [Tesla] folds, he is in very deep trouble ...

    I rather doubt that; I should think that even without Tesla, he's probably got the sort of financial resources you and I could only dream of.

    There's also the fact that, outwith child support, Musk doesn't have massive expenses. He's not Bezos buying yachts and booking an entire city for a wedding, or Trump gold plating every damn thing.
  • Piglet All Saints Host, Circus Host
    Fair point - Musk seems more like the troll who sits on his golden hoard, watching it grow.
  • The thing is, a lot of Musk's wealth is leveraged. So he has borrowed against his other holdings, and the banks have lent him suitcases full of money on the back of this.

    So losing Tesla would mean that a) his security for his loans has gone and b) the confidence that the financial markets have in him might vanish.

    So his wealth is not in yachts and gold plate. It is in his businesses - which means it is in the confidence that the markets have in him. A dragon hoard has value without the dragon. A Musk Bubble has no value without Musk.

    I mean, there is a chance that without Tesla he would still be wealthier than I would ever be. But there is a possibility that without Tesla, he would be bankrupt.
  • Piglet All Saints Host, Circus Host
    Thanks for the explanation, SC - my understanding of high finance is sketchy at best!
  • Piglet wrote: »
    Thanks for the explanation, SC - my understanding of high finance is sketchy at best!

    I understand you have people to deal with all that for you.
  • Piglet All Saints Host, Circus Host
    It's their day off ... :mrgreen:
  • So now Musk is the World's 'richest' person. Hmm ... Our Lord's parable about a fellow and building bigger barns springs to mind.
    But, honestly, what does all that money mean?
  • Arethosemyfeet Shipmate, Heaven Host
    RockyRoger wrote: »
    So now Musk is the World's 'richest' person. Hmm ... Our Lord's parable about a fellow and building bigger barns springs to mind.
    But, honestly, what does all that money mean?

    Power. Influence. The ability to wade into anything he chooses and fuck with it without consequence. He can do or say almost anything and evade any meaningful impact.
  • Golly ... poor chap!
  • RockyRoger wrote: »
    Golly ... poor chap!

    Well, quite - but (to misquote Miranda) O poor old world, that has such people in't...
  • Louise Epiphanies Host
    If you are for any reason still on X - run, don't walk - come off now and delete the app and any content you had on it.

    Grok is generating scantily clad sexualised images of minors and even seeing those on your screen can count as a criminal offence depending where you live.

    It's got a function now that can take any photo it's given and undress the person in it to micro bikini or equivalent level. So for example if you tweeted a photo of your kids or grandkids - that can be taken, pornified and widely publicly shared. Any woman can be harassed this way without consent (anyone can be harassed but women and girls seem to be the most common targets)


    Our government and many other organisations still have accounts and post there. These are the very same people who brought in the recent incompetent but onerous age-verification legislation on grounds of child safety, which really does naff all to keep children safe, but they seem to be doing nothing to stop or discourage Musk and keep using his site.
  • Doublethink Admin, 8th Day Host
    FFS !
  • NicoleMR Shipmate
    I used to play around with the idea of using Twitter/X. Now I'm super glad I never did.
  • ... and the press is covering it in a way that anthropomorphises the algorithm, which should constitute journalistic malpractice in itself.
  • Louise wrote: »
    It's got a function now that can take any photo it's given and undress the person in it to micro bikini or equivalent level. So for example if you tweeted a photo of your kids or grandkids - that can be taken, pornified and widely publicly shared. Any woman can be harassed this way without consent (anyone can be harassed but women and girls seem to be the most common targets)

    I find myself in the unfamiliar position of wanting to defend Elon Musk here.

    This is not a "feature" of X, or of Grok. There is no "bikinify" button.

    AI systems, including Grok, do have image editing features. Anyone can take an image from any source, feed it to an AI, and ask it to make changes. The fact that X now has direct access to Grok just makes it marginally easier to do.

    I won't trouble you with the image of me in a bikini that Google's Nano Banana made for me. Suffice it to say that it took certain liberties with what I would actually look like in such a garment. It is unlikely to have learned from many images of men in bikinis...

    Ideally, publicly available AI engines have filters to prevent users from producing obscene images (although people in beachwear wouldn't generally be viewed as obscene). Nothing will stop a privately-held AI engine from being run without such guardrails. There are plenty of deep fake tools around. If you told me that you could buy an AI porn compositor on the dark web that would take images or videos of any people you wanted, and generated fake video of them engaged in various explicit activities, I would be completely unsurprised.

    There's nothing special about X here. Images posted to X are really no more vulnerable to this sort of manipulation than any other publicly-posted image.
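
    To make that concrete, here is a minimal sketch of what "feed an image to an AI and ask it to make changes" looks like in code. It uses the OpenAI Python SDK's image-edit endpoint purely as a stand-in for any hosted image model (nothing Grok-specific is assumed); the file name and prompt are invented for illustration.

    ```python
    # Minimal sketch: submitting an arbitrary photo with a natural-language
    # edit request. The OpenAI SDK stands in for any hosted image model;
    # nothing here is specific to Grok or X.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("holiday_photo.png", "rb") as src:  # any image, from any source
        result = client.images.edit(
            model="gpt-image-1",  # an edit-capable model; the choice is illustrative
            image=src,
            prompt="Swap the red shirt for a blue one",
        )

    edited_b64 = result.data[0].b64_json  # the edited image, base64-encoded
    ```

    The mechanism is generic; only the provider's guardrails distinguish a harmless edit from an abusive one.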
  • Doublethink Admin, 8th Day Host
    edited January 2
    The lack of effective moderation on X probably means they don’t get taken down, and given how post promotion on X works - may mean you see them in your feed even if you haven’t searched for them. As @Louise points out, this can get you into serious trouble in some jurisdictions, including the UK.
  • Doublethink Admin, 8th Day Host
    edited January 2
    I believe the sort of thing reported in this BBC article is also illegal under the Online Safety Act in the UK.
    In a statement to the BBC, Ofcom said it was illegal to "create or share non-consensual intimate images or child sexual abuse material" and confirmed this included sexual deepfakes created with AI.

    It said platforms such as X were required to take "appropriate steps" to "reduce the risk" of UK users encountering illegal content on their platforms, and take it down quickly when they become aware of it.
  • Sipech Shipmate
    If you can train an algorithm to generate inappropriate pictures, you should be training it to spot and filter out such images.
    If the image generation is technically advanced, but the protection isn't, that is the result of design choice.
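
    A minimal sketch of the design choice being described - every function, score and threshold below is a hypothetical placeholder, not any vendor's real API - is to score each output with a safety classifier before it is ever returned or posted:

    ```python
    # Hypothetical sketch of "train it to spot and filter out such images".
    # All names and thresholds here are placeholders, not a real vendor's API.
    from dataclasses import dataclass

    @dataclass
    class ModerationVerdict:
        nsfw_score: float    # 0.0 (innocuous) .. 1.0 (explicit)
        depicts_minor: bool  # e.g. flagged by an age-estimation model

    def generate_edit(prompt: str, source_image: bytes) -> bytes:
        """Placeholder for the image-generation model."""
        raise NotImplementedError

    def moderate(image: bytes) -> ModerationVerdict:
        """Placeholder for a safety classifier trained alongside the generator."""
        raise NotImplementedError

    NSFW_THRESHOLD = 0.3  # arbitrary; a real deployment would tune and audit this

    def guarded_edit(prompt: str, source_image: bytes) -> bytes | None:
        image = generate_edit(prompt, source_image)
        verdict = moderate(image)
        # The design choice: refuse before the image ever leaves the service,
        # rather than moderating after publication.
        if verdict.depicts_minor or verdict.nsfw_score > NSFW_THRESHOLD:
            return None  # refuse to return or post the image
        return image
    ```

    If the classifier is weaker than the generator, that asymmetry is itself a design choice, which is the point being made above.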
  • Doublethink Admin, 8th Day Host
    You can bet if AI generated bomb recipes or spree murder attack plots were being uploaded government regulators would be down on them like a ton of bricks - but abuse of women is never prioritised in the same way.
  • Doublethink Admin, 8th Day Host
    edited January 2
    Sipech wrote: »
    If you can train an algorithm to generate inappropriate pictures, you should be training it to spot and filter out such images.
    If the image generation is technically advanced, but the protection isn't, that is the result of design choice.

    Tech seems to have a free pass to release fucked up betaware to the general public - we don’t accept it with medicine or cars, so why are we expected to accept it with AI?
  • Louise Epiphanies Host

    How Grok's sexual abuse hit a tipping point -
    Nonconsensual deepfakes on X are nothing new, but now it's built into the platform.


    Users discovered that Grok, the platform’s built-in AI chatbot, will create fake sexually-suggestive edits of real photos of women and girls on request—a common form of nonconsensual deepfakes.


    The main difference with X, and the reason that deepfake abuse is coming to a head now, is two-fold. First, Musk dissolved Twitter’s old Trust and Safety Council that addressed child exploitation on the platform, and he fired the vast majority of engineers working to address these kinds of problems. Second, Musk introduced Grok to the platform.

    At the exact same time that AI CSAM and other sexually abusive material was running rampant on X, Musk’s AI team created the ability for any user to prompt Grok into editing images posted by other people on the platform. Anyone could have predicted what would happen next.


    The reality is that X has not taken this as seriously as one of Grok’s user-prompted posts might seem to suggest. Instead, Musk has encouraged, laughed at, and praised Grok for its ability to edit images of fully-clothed people into bikinis. “Grok is awesome,” he tweeted while the AI was being used to undress women and children, make it look like they’re crying, generate fake bruises and burn marks on their bodies, and write things like “property of little st james island,” which is a reference to Jeffrey Epstein’s private island and sex trafficking.
    ...

    They’ve known about this happening the entire time and they made it even easier to inflict on victims. They are not investing in solutions, they are investing in making the problem worse

    (Bold mine)


    Making the entry level absolutely trivial on a badly, badly policed space where the owner encourages this *is* a new level of bad - especially for women, and in particular for those whose employer or employment forces them to use that platform, or makes it very hard to avoid for professional reasons, e.g. because government departments or people they need to cover are on there.

    And if you want to trivialise that, I refer you to Arkell v Pressdram.
  • You can bet if AI generated bomb recipes or spree murder attack plots were being uploaded government regulators would be down on them like a ton of bricks - but abuse of women is never prioritised in the same way.

    The other side of it is that the current government of the UK is very wary of angering the tech giants for a number of reasons; not least the ire of Trump - but doubtless their own growth plans and the chances for sinecures afterwards also play a part.
  • Doublethink Admin, 8th Day Host
    edited January 2
    Stuff the government; this is already illegal, and therefore it ought to be possible to pressure regulators, police and prosecutors into enforcing the existing law.
  • chrisstiles Hell Host
    edited January 2
    Stuff the government; this is already illegal, and therefore it ought to be possible to pressure regulators, police and prosecutors into enforcing the existing law.

    Come on now.

  • Doublethink Admin, 8th Day Host
    I live in hope.
  • Sipech wrote: »
    If you can train an algorithm to generate inappropriate pictures, you should be training it to spot and filter out such images.
    If the image generation is technically advanced, but the protection isn't, that is the result of design choice.

    "I am thinking about buying a new bikini. This is what I look like, and these are the bikinis I am considering. Show me how I would look posing in each bikini."

    That sounds like a perfectly reasonable use of an AI image tool, doesn't it?

    "This is my new co-worker. Show me how she would look posing in this bikini."

    That sounds like sexual harassment.

    But it's the same tool, and the same operations.
  • You can just block both operations (in the same way that Google Images no longer allows you to search for matching human images).
  • Crœsos Shipmate
    Sipech wrote: »
    If you can train an algorithm to generate inappropriate pictures, you should be training it to spot and filter out such images.
    If the image generation is technically advanced, but the protection isn't, that is the result of design choice.
    "I am thinking about buying a new bikini. This is what I look like, and these are the bikinis I am considering. Show me how I would look posing in each bikini."

    That sounds like a perfectly reasonable use of an AI image tool, doesn't it?

    "This is my new co-worker. Show me how she would look posing in this bikini."

    That sounds like sexual harassment.

    But it's the same tool, and the same operations.

    I'm skeptical of being able to offload liability through automation. What I mean is it's instructive to ponder a situation where Xitter offered a service where a human being* would alter a digital image to a user's specifications. I don't think any of us would hold Xitter blameless if the people employed to do that job routinely produced child pornography and Xitter claimed there was nothing they could do about it. This would be true even (especially) if the end user specifically requested child porn.

    I don't think it should be that easy for corporations to avoid culpability.


    *Given the degree to which a lot of supposed "artificial intelligence" turns out to be mechanical Turks, supposed automata secretly controlled by human beings, I will not rule out this possibility for Grok.
  • Stuff the government; this is already illegal, and therefore it ought to be possible to pressure regulators, police and prosecutors into enforcing the existing law.

    The government have said that they can't even stop posting on X because ... well, all the hard-working families who rely on it as a primary news source:

    https://bsky.app/profile/jim.londoncentric.media/post/3mbpa2jwmqk22

    (Starmer speaking in Parliament - if he honestly believes that, then he's supremely out of touch).
  • Doublethink Admin, 8th Day Host
    edited January 6
    Conversely, Ofcom seem to have stirred slowly into contemplating action.
  • Hugal Shipmate
    From what I can tell, part of the problem is that Grok is programmed not to bother about “woke” things. What it has come up with before this was bad enough.
  • Crœsos wrote: »
    I'm skeptical of being able to offload liability through automation. What I mean is it's instructive to ponder a situation where Xitter offered a service where a human being* would alter a digital image to a user's specifications. I don't think any of us would hold Xitter blameless if the people employed to do that job routinely produced child pornography and Xitter claimed there was nothing they could do about it. This would be true even (especially) if the end user specifically requested child porn.

    On the other hand, if Adobe sells you a copy of Photoshop, and you use it to make child porn, it doesn't seem reasonable to assign Adobe the blame. The extent to which an AI image editing/generation service is more like contract work or more like the provision of a software tool is debatable, I think.

    There are a whole range of images that could be sexually objectifying and would be clear evidence of sexual harassment if you made and distributed them featuring a colleague's likeness, but are not pornographic, including many of the beachwear images we have been discussing.

    To be clear, if I want to make an image of me in a bikini because I want to "virtually try it on" with a view to potentially purchasing it, that seems like a perfectly reasonable thing for me to want to do, and it seems like a reasonable thing for some AI service to be willing to do for me.

    If I make the same image featuring a new employee and post it in the break room, I should be fired for sexual harassment, and perhaps I should face criminal charges.

    The solution of @chrisstiles is that an AI service should just refuse to make bikini pictures, on the grounds that I might be intending to harass someone with them.

    I would like to say that the solution was to prosecute the harassment, rather than the technology, but that becomes challenging when the harassment is anonymous.
  • Crœsos Shipmate
    Crœsos wrote: »
    I'm skeptical of being able to offload liability through automation. What I mean is it's instructive to ponder a situation where Xitter offered a service where a human being* would alter a digital image to a user's specifications. I don't think any of us would hold Xitter blameless if the people employed to do that job routinely produced child pornography and Xitter claimed there was nothing they could do about it. This would be true even (especially) if the end user specifically requested child porn.

    On the other hand, if Adobe sells you a copy of Photoshop, and you use it to make child porn, it doesn't seem reasonable to assign Adobe the blame. The extent to which an AI image editing/generation service is more like contract work or more like the provision of a software tool is debatable, I think.

    The fact that Xitter can't/won't sell people a stand-alone copy of Grok makes it seem a lot more like contract work. That's how we'd regard any other process that's completely internal to Xitter, aside from user input.
  • chrisstiles Hell Host
    edited January 6
    The solution of @chrisstiles is that an AI service should just refuse to make bikini pictures, on the grounds that I might be intending to harass someone with them.

    Well, for a start it's also generating pictures with less covering than a bikini, which yes, it should absolutely be refusing to generate because of just this scenario.
    I would like to say that the solution was to prosecute the harassment, rather than the technology, but that becomes challenging when the harassment is anonymous.

    If you are creating and publishing CSAM you need to be in jail; you don't get to pass the responsibility onto someone else simply because they asked you to create and publish CSAM.
  • The Rogue Shipmate
    Is the problem with Grok less about the generation of such images and more about the fact that once they are generated they are then freely available to anyone?
  • The Rogue wrote: »
    Is the problem with Grok less about the generation of such images and more about the fact that once they are generated they are then freely available to anyone?

    That's not a Grok feature per se. You are not obliged to publicly share the images you make with Grok. It's just that the Grok / X integration makes it easy for someone to see an image, have Grok edit the image, and tweet (is there a similar verb now it's called X?) the resulting picture. Publishing the images is still a choice that a human makes.
  • Crœsos wrote: »
    The fact that Xitter can't/won't sell people a stand-alone copy of Grok makes it seem a lot more like contract work. That's how we'd regard any other process that's completely internal to Xitter, aside from user input.

    I'm not convinced. A lot of traditional software has been moving towards a cloud model. I don't think that the fact that I can't buy an offline version of Onshape (a CAD application) makes it like contract work.

    The difference, it seems to me, between something like Onshape (which is basically a traditional-looking CAD program that runs in the cloud, and uses the user's browser as an interface) and an AI image generation tool is the way that I deliver instructions, and the flexibility that the tool has to interpret those instructions.

    If I'm running a traditional image editing tool, I'm changing the color of pixels, I'm shading and distorting and smearing and doing various other things to the image, but it's not terribly different from if I was standing at a real canvas with paints and brushes. Although even traditional tools like Photoshop now have AI-driven generative fills, that can, for example, invent a background to replace a person that you've removed from an image with.

    Am I doing something fundamentally different if I ask Grok to switch out my red shirt for a blue one, rather than doing it in Photoshop?
  • Crœsos Shipmate
    edited January 7
    If I'm running a traditional image editing tool, I'm changing the color of pixels, I'm shading and distorting and smearing and doing various other things to the image, but it's not terribly different from if I was standing at a real canvas with paints and brushes. Although even traditional tools like Photoshop now have AI-driven generative fills, that can, for example, invent a background to replace a person that you've removed from an image with.

    Am I doing something fundamentally different if I ask Grok to switch out my red shirt for a blue one, rather than doing it in Photoshop?

    Yes. You're offloading the work of actually doing so to whoever programmed Grok.
  • Crœsos wrote: »
    If I'm running a traditional image editing tool, I'm changing the color of pixels, I'm shading and distorting and smearing and doing various other things to the image, but it's not terribly different from if I was standing at a real canvas with paints and brushes. Although even traditional tools like Photoshop now have AI-driven generative fills, that can, for example, invent a background to replace a person that you've removed from an image with.

    Am I doing something fundamentally different if I ask Grok to switch out my red shirt for a blue one, rather than doing it in Photoshop?

    Yes. You're offloading the work of actually doing so to whoever programmed Grok.

    I don't think the lines are so bright. Photoshop, for example, comes with all kinds of canned filters that achieve various aims, including filters to re-colorize images in various ways, and as I noted AI-driven background fills.

    In some sense, every tool we use is "offloading the work to the person who built the tool". If I dig a hole with a shovel, am I offloading part of the work to the blacksmith, because it's easier for me to dig a hole with a shovel than with my bare hands? What if I switch out my shovel for a backhoe? Am I now offloading the work to the manufacturer of the backhoe?

    What if I buy a "smart" backhoe with a one-button automated hole-digging feature?

    I'm still choosing where the holes go. If I tell the tool to dig a hole here, and it digs up my neighbor's flower beds, that's not the tool's fault: that's my fault.

    If I tell the tool to dig some holes, and it goes and drives through my neighbor's front windows instead, then that's a flaw in the tool.
  • Doublethink Admin, 8th Day Host
    I think the issue is it is a blindingly obvious way to break the law, in a way that harms women, in a public, poorly moderated space.

    If I ran a bar and served whiskey to five-year-olds, I would be prosecuted; "they bought it" would not be a defence. They are, reportedly, running their business in a dangerous way - it is gross negligence to allow this to happen on your premises, to facilitate it happening.
  • Crœsos Shipmate
    What if I buy a "smart" backhoe with a one-button automated hole-digging feature?

    I'm still choosing where the holes go. If I tell the tool to dig a hole here, and it digs up my neighbor's flower beds, that's not the tool's fault: that's my fault.

    If I tell the tool to dig some holes, and it goes and drives through my neighbor's front windows instead, then that's a flaw in the tool.

    Given the trouble Tesla has had with its self-driving software, Elon Musk has not earned any benefit of the doubt on questions of user vs. design when it comes to fuck ups.

    In a related matter, the Financial Times has a feature on Who's who at X, the deepfake porn site formerly known as Twitter [ gift link ]. The profile photos have been altered in a specific, subtle way. See if you can notice. [Images are workplace safe.]
  • Louise Epiphanies Host
    I think the issue is it is a blindingly obvious way to break the law, in a way that harms women, in a public, poorly moderated space.

    This - and research has shown that's indeed what it's being used for, with the targets being overwhelmingly women and girls.

    Long before AI, misogynists and sexists used to put up Page 3 of The Sun and other sexualised depictions of women to make workspaces hostile to women. They didn't even need to photoshop workmates' faces onto them to use them to harass. This is a refinement and force-multiplier of the sexists' old techniques. It's a wonderful shiny new addition to their toolbox - being used to harass any woman they choose, to drive them off what - thanks to our government and other sexists - is still an important space where powerful people talk to each other and policy is influenced.

    They're not just photoshopping wanking material in the privacy of their own home, they are flooding a public space used by government and public institutions with misogyny and CSAM - and because it has a significant enough user base, they're also helping to drive callous misogyny in society at large.
  • Jane R Shipmate
    And it must be nice, being able to pontificate about the finer legal points of exactly what the perpetrators are guilty of, without having to worry about whether you yourself are among their victims.
  • Ruth Shipmate
    Jane R wrote: »
    And it must be nice, being able to pontificate about the finer legal points of exactly what the perpetrators are guilty of, without having to worry about whether you yourself are among their victims.

    This, this, a thousand times this.
  • I think @Jane R makes an important point: Twitter and its associated tools are enabling the production of abusive images and enabling them to be shared widely.

    They are responsible for enabling abuse. This is not a legal pronouncement, this is a moral pronouncement. The finer points of law are not really that relevant.

    The fact that another White Supremacist has provided another tool for the abuse of women is relevant. And yes, I really do feel for those who have been on the end of this abuse.