duncesplayed 3 months ago • 100%
LocalSend. It’s exactly like Apple Airdrop
This may be super-nitpicky (and I love LocalSend and use it a lot), but there is one difference between LocalSend and AirDrop. LocalSend requires network connectivity (and the devices have to be on the same network), whereas AirDrop can work without any network connection (using Bluetooth).
duncesplayed 3 months ago • 81%
I recently discovered that he believes it's theft if you watch one of his videos with an adblocker. Just out of spite, sometimes I put one of his videos on in the background (muted) with an adblocker.
duncesplayed 4 months ago • 100%
Something something broken arms
Edit: Wow, thank you for the gold, kind stranger!
duncesplayed 4 months ago • 100%
To be honest I'm more concerned by language-humor
.
Like not even saying what kind of humour, just any type of humour at all.
Jokes are for adults only!
duncesplayed 4 months ago • 100%
"But you already have a queen on the board"
"Have you heard of a sex act called 'the ladder mate'? You're the bottom removed"
duncesplayed 4 months ago • 50%
WWII sent a very clear message. You can annex Austria. You can invade Czechoslovakia. You can take over Lithuania. But you don't fuck with Poland
Well, I mean, you can fuck with Poland a little bit. You just can't take over, like, too much of Poland.
duncesplayed 4 months ago • 100%
Some extra info about Sierra's game engines....
AGI was indeed first used in KQ1, though earlier Sierra adventure games (even going back to Mystery House in 1980) used something extremely similar. AGI was just formalizing what they'd done before and setting it as a common platform for all future games.
In those days, it was, of course, not possible to write an entire adventure game in machine code because there wasn't even memory to hold more than a handful of screens. The use of bytecode was as much a compression scheme as anything else. So AGI was just a bytecode interpreter. Vector graphics primitives (e.g., draw line, flood fill) could be written in just a few bytes, much better than machine code.
Ken Williams made a splash with early Sierra games because he had an extremely simple insight that most others at the time didn't seem to have: for graphics operations, allow points to be on even-numbered x coordinates only. Most platforms had a horizontal resolution of 320, too much for 1 byte. Ken Williams had his early game engines divide every x coordinate by 2 so that it could fit into a single byte (essentially getting only 160 horizontal pixels). A silly trick, but it got big memory savings and allowed him to pack more graphics into RAM than many other people could at the time.
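The trick above is easy to sketch. This is purely illustrative (not Sierra's actual code, and the function names are made up): a 320-wide screen needs x values 0..319, which don't fit in one byte, but halving them does, at the cost of losing odd columns.

```python
# Illustrative sketch of the halved-x-coordinate trick: store x // 2
# so each point packs into two bytes (one for x, one for y).

def pack_point(x, y):
    """Pack a point into two bytes by halving x (odd columns are lost)."""
    assert 0 <= x < 320 and 0 <= y < 256
    return bytes([x // 2, y])

def unpack_point(data):
    """Recover the point; x comes back rounded down to an even column."""
    half_x, y = data[0], data[1]
    return half_x * 2, y

print(unpack_point(pack_point(319, 100)))  # (318, 100): the odd x is gone
```

Half the horizontal resolution for half the storage per coordinate, which adds up fast when a whole game's scenes have to fit in a few dozen KB.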
After AGI (KQ3 was the last King's Quest to use AGI), Sierra switched over to their new game engine/bytecode interpreter: SCI. SCI was rolled out in two stages, though.
SCI0 (e.g., KQ4) was 16 colours and still revolved around the text parser. SCI1 (e.g., KQ5) was 256 colours and was point-and-click. (SCI2 and later were full multimedia)
For the game player, the major differences you'll notice between AGI and SCI0 (both 16 colours, both text-based) are that SCI0 renders using dithering, gets full horizontal precision (x coordinates stored in 2 bytes), has multiple fonts, and supports real sound devices (MT-32, AdLib). For the programmer, though, AGI and SCI0 were pretty radically different: SCI as a programming language was an object-oriented, vaguely Scheme-inspired sort of language.
duncesplayed 4 months ago • 100%
Yeah, during the reddit exodus, people were recommending overwriting your comment with garbage before deleting it. This (probably) forces them to restore your comment from backup. But realistically they were always going to harvest the comments stored in backup anyway, so I don't think it caused them any more work.
If anything, this probably just makes reddit's/SO's partnership more valuable because your comments are now exclusive to reddit's/SO's backend, and other companies can't scrape it.
duncesplayed 5 months ago • 100%
According to here, Vermont and Utah do not have any titled players. At least Oregon has an FM.
duncesplayed 5 months ago • 88%
Why the quotes?
If you ever see quotation marks in a headline, it simply means they're attributing the word/phrase to a particular source. In this case, they're saying that the word "security" was used verbatim in the intranet document. Scare quotes are never used in journalism, so they're not implying anything by putting the word in quotation marks. They're simply saying that they're not paraphrasing.
duncesplayed 5 months ago • 100%
The article mentions they'll continue making the eZ80. If you're in the middle of making a PCB around the Z80, you'll just have to change the pins, I guess.
duncesplayed 5 months ago • 100%
Heads up for anyone (like me) who isn't already familiar with SimpleX: unfortunately its name makes it impossible to search for unless you already know what it is. I was only able to track it down after a couple of frustrating minutes, once I added "linux" into the search on a lark.
duncesplayed 5 months ago • 100%
If you pump out enough research papers, maybe Microsoft won't move you over to the Office team.
duncesplayed 6 months ago • 100%
Reminds me a little of the old Jonathan Shapiro research OSes (Coyotos, EROS, CapROS), though toned down a little bit. The EROS family was about eliminating the filesystem entirely at the OS level, since you can simulate files with capabilities anyway. Serenum seems to tone that down and effectively have file- or directory-level capabilities, which I think is sensible if you're going to have a capability-based OS, since files and directories are concepts that are a bit more user-visible.
He's got the same problem every research OS has: zero software. He's probably smart to ditch the idea of hardware entirely and just fix on one hardware platform.
I wish him luck selling his computer systems, but I doubt he's going to do very well. What would a customer do with one of these? Edit files? And then...edit them again? I guess you can show off how inconvenient it is to edit things due to its security.
I just mean it's a bit optimistic to try and fund this by selling it. I understand he doesn't have a research grant, but it's clearly just a research OS.
duncesplayed 6 months ago • 100%
To be fair, it's the newest rule change, so some older players may think it's some new-fangled whippersnapper thing. We've only had about 150 years to get used to it.
duncesplayed 6 months ago • 98%
You just don't appreciate how prestigious it is to get a degree from Example U.
duncesplayed 6 months ago • 100%
I feel like the answer is recycling deposits somehow. I've seen attempts at them here and there, but I guess we haven't quite figured out the details yet. I guess electronics are a bit trickier to set up a deposit system for than pop cans. Even the places that do have electronics deposits, often you have to drive to a special recycling centre out past the airport that's open 3 hours in the middle of the day, only for them to tell you that everything's glued together so they can't really separate out the parts they need and most of it will probably end up just going to the landfill anyway.
But theoretically, if we could get a serious deposit system that allowed recycling to be profitable and gave manufacturers an incentive to make their stuff easier to take apart and recycle (and hence easier to repair), that would be pretty sweet.
duncesplayed 6 months ago • 100%
I'm guessing childless adults come in significantly below that. Just thinking about my kids and all of their book readers, barking animal toys, and light-up fairy wands, I have a bad feeling they may be bringing up that average.
Though the nice thing about kids' electronics is they never get obsoleted. A light-up fairy wand is just as fun in 2074 as it is in 2024. So they just get cycled through the 2nd hand mommy communities until they break. It was $40 new, you buy it "mostly undamaged" for $20, hope your kid doesn't scratch it too badly so you can sell it a couple years down the line for $10 or so.
The bad thing about kids' electronics is that, for new stuff, it's really impossible to tell how long it's going to last. Could be 20 years, could be 20 minutes.
duncesplayed 6 months ago • 100%
Sure! We can insure that for you! Oh we just noticed that our InsureLink service isn't connecting to your car. So I'll just need you to sign this waiver saying that you're declining the InsureLink Safety discount. Just sign right here. It's just saying that we cannot offer you all of our insurance services, just like if you get in an accident or something and we can't remotely verify what you were doing at the time, we can't help you. Great! And without the Safety discount your premiums will go up by only 372.50 a month.
duncesplayed 6 months ago • 100%
The threat resides in the chips’ data memory-dependent prefetcher
Well that sounds extremely familiar. Nice to see the spirit of Spectre is still living on. The holy grail of speculation without any timing attack leaks is still eluding us, I guess.
duncesplayed 6 months ago • 100%
I was saying Boo-urns.
duncesplayed 6 months ago • 100%
The end game of chess is social alienation and alcoholism. The only winning move is not to play. Everything else is a blunder.
duncesplayed 6 months ago • 100%
I play chess960, so I just keep aborting games until I get a board where f3 makes sense.
duncesplayed 6 months ago • 100%
Let's do the CBA.
Keep playing:
- Gain playing-from-a-losing-position XP
- Gain end-game XP
- Gain playing-without-a-queen XP
- Allow your opponent the satisfaction of a mate
- Bestow honour onto the name of your family
Resign:
- Save 1 minute of your time
- Feel like a stupid pansy removed
Tough choice.
duncesplayed 7 months ago • 100%
It is, but it probably shouldn't be any more. WebP has good support everywhere now and is slightly better than JPEG and PNG combined. (Better lossy compression than JPEG, plus transparency support, and better lossless compression than PNG). But even WebP is considered lame these days compared to the new crop.
E.g., JXL (JPEG XL) is much better than WebP and is supported by everyone except Google (which is ironic, since Google helped create it). Google seems to want AVIF to be the winner among the new image formats, but not many others do.
Anyway, until the Google JXL AVIF hissy fit is dealt with, at least we've still got WebP. It's not super great, but it's at least better than JPEG and PNG. A lot of web developers are stuck in their old JPEG PNG mindset and are being slow to adapt, so JPEG is still hanging around.
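In practice, the way sites cope with this split support today is content negotiation: look at what the browser advertises in its `Accept` header and fall back to the universally supported formats. Here's a minimal sketch of that idea; the function name and priority order are my own illustration, not any particular framework's API.

```python
# Hedged sketch: pick the best image format a client claims to accept,
# falling back to JPEG, which everything supports.

def pick_image_format(accept_header: str) -> str:
    """Choose the newest format listed in the Accept header."""
    for fmt, mime in (("jxl", "image/jxl"),
                      ("avif", "image/avif"),
                      ("webp", "image/webp")):
        if mime in accept_header:
            return fmt
    return "jpeg"  # universal fallback

print(pick_image_format("image/avif,image/webp,*/*"))  # avif
```

A real server would also weigh `q=` preference values, but the substring check captures the gist: until JXL support lands everywhere, most clients will fall through to WebP or JPEG.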
duncesplayed 7 months ago • 100%
aborting everytime you are black
duncesplayed 7 months ago • 100%
Here's another reason you should never resign: endgames are crazy hard, and not resigning is the only chance you'll ever get to practice them.
duncesplayed 7 months ago • 100%
I feel like this should be required reading for a lot of Linux users. That article is a couple years old now, but I think is even more true now than it was when it was written. Having a middleman (package maintainer) between the user and the software developer is a tremendous benefit. Maintainers enforce quality, and if you bypass them, you're going to end up with Linux as the Google Play Store (doubly so if you try and fool yourself into thinking it won't happen because "Linux is different")
duncesplayed 7 months ago • 100%
duncesplayed 7 months ago • 100%
Linux is the only platform to get native WebGL, too!
duncesplayed 7 months ago • 95%
It's in Proverbs 11:20
The C++ developers are an abomination to the Lord,
But the Rustaceans in their Rust-based OSes are His delight.
duncesplayed 7 months ago • 100%
Totally agreed. I never used Twitter. I tried in earnest to use Mastodon for a couple years, because I wanted it to succeed, just kind of ideologically.
Eventually I realized that the whole concept of "microblogging" is just fundamentally awful. (At least for me)
duncesplayed 7 months ago • 100%
It's true. And people try to jump on to similar things. "It's just like how email works!", or "It's just like how international phone calls work!"
Yeah, nobody has any clue how those two things work, either.
duncesplayed 8 months ago • 100%
The search term is censored by DuckDuckGo in Korea. Even robots apparently think it's going to be an IoT buttplug.
duncesplayed 8 months ago • 100%
That's Saturday night in North American time zones. Just a heads up in case you're planning a boys' night out a couple hundred billion years in advance, maybe move it to Friday night in case the world ends Saturday night.
duncesplayed 8 months ago • 100%
Oh yeah, totally. I, too, have solved chess. Haven't we all? I totally get what you're talking about.
duncesplayed 8 months ago • 100%
Have you been following any of the court battles involving LLMs lately?
The New York Times suing OpenAI. Getty Images suing Stability AI. Sarah Silverman and George R.R. Martin suing OpenAI.
All of those cases involve data that has been scraped. (In the latter two cases, the memoir/novels were scraped from excerpts and archives found online).
It's too late to say with complete certainty that it's all legal (the appeal processes haven't all finished yet), but at this point it looks like using scraped and copyrighted data to train LLMs is legal. Even if it turns out not to be legal, it's very clear that nobody's shying away from doing it: the court record shows, as a matter of fact, that it's been happening for years.
Everything you've written is just fantasy. We have a lot of reality that contradicts it. Every LLM company has primarily relied on scraped data (which we know to be completely legal) and has incorporated copyrighted, scraped data into its data sets (which is still legally a grey area, but is happening anyway).
duncesplayed 8 months ago • 100%
Out of curiosity, did you use it as a daily driver? A friend of mine tried it out briefly, and it was pretty cool, but the lack of applications meant we couldn't really do anything with it (other than marvel at how cool it was). Did it eventually get applications developed for them? Like did they have an office suite?
duncesplayed 8 months ago • 100%
Has reddit not already been scraped? With all of that information exposed bare on the public Internet for decades, and apparently so valuable, I find it hard to believe that everybody's just been sitting there twiddling their thumbs, saying "boy I sure hope they decide to sell us that data one day so that we don't have to force an intern to scrape it for us".
duncesplayed 8 months ago • 100%
Let's not rule out Æ
I'm a university professor and I often found myself getting stressed/anxious/overwhelmed by email at certain times (especially end-of-semester/final grades). The more emails that started to pile in, the more I would avoid them, which then started to snowball when people would send extra emails like "I sent you an email last week and haven't got a response yet...", which turned into a nasty feedback loop.

My solution was to create 10 new email folders, called "1 day", "2 days", "3 days", "4 days", "5 days", "6 days", "7 days", "done", "never" and "TIL", which I use during stressful times of the year. Within minutes of an email coming into my inbox, I move it into one of those folders. "never" is for things that don't require any attention or action by me (mostly emails from the department about upcoming events that don't interest me). "TIL" is for things that don't require an action or have a deadline, but I know I'll be referring to a lot: contact information, room assignments, plans, policy updates. The "x days" folders are for self-imposed deadlines; if I want to ensure I respond to an email within 2 days, I put it in the "2 days" folder, for example. And the "done" folder is for when I have completed dealing with an email. This even includes emails where the matter isn't resolved but I've replied, so it's in the other person's court, so to speak. When they reply, it'll pop back out of "done" into the main inbox for further categorizing, so it's no problem.

So during stressful, email-heavy times of year, I wake up to a small number of emails in my inbox. To avoid getting stressed, I don't even read them fully. I read *just* enough of each to decide whether I'll respond to it (later) or not, categorize everything, and my inbox is then perfectly clean. Then I turn my attention to the "1 day" box, which probably only has about 3 or 4 emails in it. Not so overwhelming to only look at those, and once I get started, I find I can get through them pretty quickly.

The thing I've noticed is that once I get over the initial dread of looking at my emails (which used to be caused by looking at a giant dozens-long list of them), going through them is pretty quick and smooth. The feeling of cleaning out my "1 day" inbox is a bit intoxicating/addictive, so then I'll *want* to peek into the "2 days" box to get a little ahead of schedule, and so on. (And if I don't want to peek ahead that day, hey, no big deal.) Once I'm done with my emails, I rotate them (e.g., move all the "2 days" into "1 day", then all the "3 days" into "2 days", and so on) and completely forget about them guilt-free for the rest of the day. Since implementing this system a year ago, I have *never* had an email languish for more than a couple weeks, and I don't get anxiety attacks from checking email any more.
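The nightly "shift everything down a day" step can be sketched as a tiny rotation. This is a toy illustration only: the folder names match the description, but actually moving mail depends on your client or IMAP library, so plain lists stand in for folders here.

```python
# Toy sketch of the day-bucket rotation: "2 days" mail moves into
# "1 day", "3 days" into "2 days", and so on up the chain.

FOLDERS = ["1 day"] + [f"{n} days" for n in range(2, 8)]

def rotate_buckets(buckets):
    """Shift every bucket down one day.

    Mail still sitting in "1 day" is assumed to have been handled
    that day, so it simply drops out of the rotation.
    """
    rotated = {name: [] for name in FOLDERS}
    for dst, src in zip(FOLDERS, FOLDERS[1:]):
        rotated[dst] = buckets.get(src, [])
    return rotated

week = {"1 day": ["urgent"], "2 days": ["grades"], "3 days": ["forms"]}
print(rotate_buckets(week)["1 day"])  # ['grades']
```

The nice property is that the rotation is mechanical: no rereading or reprioritizing, just one bulk move per folder at the end of the day.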
Thomas Gleixner of Linutronix (now owned by Intel) has posted 58 patches for review into the Linux kernel, but they're only the beginning! Most of the patches are just first steps toward more major renovations into what he calls "decrapification". He says:

> While working on a sane topology evaluation mechanism, which addresses the short-comings of the existing tragedy held together with duct-tape and hay-wire, I ran into the issue that quite some of this tragedy is deeply embedded in the APIC code and uses an impenetrable maze of callbacks which might or might not be correct at the point where the CPUs are registered via MPPARSE or ACPI/MADT.

> So I stopped working on the topology stuff and decided to do an overhaul of the APIC code first. Cleaning up old gunk which dates back to the early SMP days, making the CPU registration halfways understandable and then going through all APIC callbacks to figure out what they actually do and whether they are required at all. There is also quite some overhead through the indirect calls and some of them are actually even pointlessly indirected twice. At some point Peter yelled static_call() at me and that's what I finally ended up implementing.

He also, at one point, (half-heartedly) argues for the removal of 32-bit x86 code entirely, arguing that it would simplify APIC code and reduce the chance of introducing bugs in the future:

> Talking about those museums pieces and the related historic maze, I really have to bring up the question again, whether we should finally kill support for the museum CPUs and move on.

> Ideally we remove 32bit support alltogether. I know the answer... :(

> But what I really want to do is to make x86 SMP only. The amount of #ifdeffery and hacks to keep the UP support alive is amazing. And we do this just for the sake that it runs on some 25+ years old hardware for absolutely zero value. It'd be not the first architecture to go SMP=y.

> Yes, we "support" Alpha, PARISC, Itanic and other oddballs too, but that's completely different. They are not getting new hardware every other day and the main impact on the kernel as a whole is mostly static. They are sometimes in the way of generalizing things in the core code. Other than that their architecture code is self contained and they can tinker on it as they see fit or let it slowly bitrot like Itanic.

> But x86 is (still) alive and being extended and expanded. That means that any refactoring of common infrastructure has to take the broken hardware museum into account. It's doable, but it's not pretty and of really questionable value. I wouldn't mind if there were a bunch of museum attendants actively working on it with taste, but that's obviously wishful thinking. We are even short of people with taste who work on contemporary hardware support...

> While I cursed myself at some point during this work for having merged i386/x86_64 back then, I still think that it was the correct decision at that point in time and saved us a lot of trouble. It admittedly added some trouble which we would not have now, but it avoided the insanity of having to maintain two trees with different bugs and "fixes" for the very same problems. TBH quite some of the horrors which I just removed came out of the x86/64 side. The oddballs of i386 early SMP support are a horror on their own of course.

> As we made that decision more than 15 years [!] ago, it's about time to make new decisions.

[Linus responded to one of the patches](https://lore.kernel.org/lkml/CAHk-=wh9sDpbCPCekRr-fgWYz=9xa0_BOkEa+5vOr9Co-fNhrQ@mail.gmail.com/), saying "I'm cheering your patch series", but has diplomatically not acknowledged the plea to remove 32-bit support.
Hey all technology people! Not my community, but I thought I'd advertise someone else's new lemmy community to see if anyone else is interested. Head over to !bbses@lemmy.dbzer0.com for BBSes and retrocomputing.
It feels like we have a new privacy threat that's emerged in the past few years, and this year especially. I kind of think of the privacy threats over the past few decades as happening in waves:

1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we're fighting it mostly by avoiding Big Tech ("De-Googling", switching from social media to communities, etc.).
3. Now we're in a new wave. Big Tech is now building massive GPTs (ChatGPT, Google Bard, etc.) and it's all trained on *our* data. Our reddit posts and Stack Overflow posts and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn't help, since they can access our posts no matter where we post them.

So for that third one... what do we do? Anything that's online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you've provided? If you do care, do you think there's any reasonable way we can fight back? Can we poison their training data somehow?
I do enjoy watching a bit of sportsball. You know the thing where fans of red guys and fans of blue guys chirp and bantz at each other about which colour is the losers and which colour owns, and you hurl insults at the refs for making a call even if it was probably correct. So what are the biggest sports communities on lemmy? And/or are there any instances that have more sports than others? Considering the size of lemmy, I'm not even in the mood to be picky about which sport it is. I just want to tell someone else on lemmy that their team sucks.