
Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 571 Posts
  • 10.3K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • Edit: The viewing angle is 57 degrees. In comparison, most VR headsets have a viewing angle of 90-110 degrees.

    FOV shouldn’t be ignored, but having a large FOV also isn’t necessarily desirable on an HMD. It’s important for VR, because a major point of that is filling one’s peripheral vision, creating a sense of immersion.

    But if you wanted, say, an HMD as a monitor replacement — something that I’d be interested in — it doesn’t buy all that much, because the stuff that you can see with high detail is only in a small cone in front of you. For a given pixel resolution, I’d rather have a smaller FOV on an HMD for that, because you can take advantage of fairly high angular resolutions within that narrow cone. The real bottleneck on an HMD as a monitor replacement is the limited angular resolution you get — you can’t have something that gets as sharp and crisp as existing conventional monitors.
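
    To put rough numbers on that, here’s a minimal back-of-the-envelope sketch. The 2000-px panel width and the monitor figures are hypothetical, and it treats angular resolution as a flat pixels-per-degree average, ignoring lens distortion:

    ```python
    import math

    # Hypothetical numbers for illustration: a 2000-px-wide panel, with
    # pixels assumed to be spread evenly across the field of view.
    def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
        """Average horizontal angular resolution across the FOV."""
        return h_pixels / h_fov_deg

    print(pixels_per_degree(2000, 100))  # 20.0 PPD at a 100-degree FOV
    print(pixels_per_degree(2000, 57))   # ~35 PPD at a 57-degree FOV

    # For comparison, a 27-inch 4K monitor (~59.8 cm wide) viewed from 60 cm:
    monitor_fov = 2 * math.degrees(math.atan((59.8 / 2) / 60.0))  # ~53 degrees
    print(pixels_per_degree(3840, monitor_fov))  # ~72 PPD
    ```

    The same hypothetical panel gets substantially more detail per degree at the narrower FOV, but even then it falls well short of the ~72 PPD of a desk-distance 4K monitor, which is near the roughly 60 PPD often cited for 20/20 acuity.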


  • 1. The best engineers are obsessed with solving user problems.

    Ehh. Not sure I agree. I mean, I think that there is a valid insight that it’s important to keep track of what problem you’re actually trying to solve, and that that problem needs to translate to some real-world, meaningful thing for a human.

    But I also think that there are projects that are large enough that it’s entirely reasonable to be a perfectly good engineer who isn’t dealing with users much at all, where you’re getting solid requirements that have been drawn up by someone else. If you’re trying to, say, improve the speed at which Zip data decompression happens, you probably don’t need to spend a lot of time going back to the original user problems. Maybe someone needs to do so, but that doesn’t need to be the focus of every engineer.

    1. Bias towards action. Ship. You can edit a bad page, but you can’t edit a blank one.

    I think I’d go with a more specific “It’s generally better to iterate”. Get something working, keep it working, and make incremental improvements.

    There are exceptions out there, but I think that they are rare.

    1. At scale, even your bugs have users.

    With enough users, every observable behavior becomes a dependency - regardless of what you promised. Someone is scraping your API, automating your quirks, caching your bugs.

    This creates a career-level insight: you can’t treat compatibility work as “maintenance” and new features as “real work.” Compatibility is product.
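
    A minimal, hypothetical sketch of the kind of accidental dependency being described here: a client that regex-parses a human-readable error string that was never part of any documented contract:

    ```python
    import re

    # Hypothetical client code. Suppose the API documents only an HTTP 429
    # status for rate limiting, but this client digs the retry delay out of
    # the human-readable message text.
    def retry_delay_seconds(error_message: str) -> int:
        match = re.search(r"retry after (\d+) seconds", error_message)
        if match is None:
            # The moment the server rewords the message (say, "retry in 30s"),
            # this client breaks, even though no documented behavior changed.
            raise RuntimeError("unrecognized error format")
        return int(match.group(1))

    print(retry_delay_seconds("rate limited: retry after 30 seconds"))  # 30
    ```

    Once enough clients like that exist, the wording of the error message is effectively part of the interface, whether or not it was ever promised.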

    This is one thing that I think Microsoft has erred on in a number of cases. Like, a lot of the value in Windows to a user is a consistent workflow where they can use their existing expertise. People don’t generally want their workflow changed: even if you can slightly improve it, the re-learning cost is high, and people want to make changes on their own schedule, not have things change underfoot.

    The fastest way to learn something better is to try teaching it.

    I don’t know if it’s the fastest, but I do think that you often really discover how embarrassingly large the gaps in your own understanding are when you teach it.

    A little kid asking “why” can be a humbling experience.


  • I’d call good and evil human concepts, along with the moral or ethical codes and ideas that define them. They aren’t going to exist in a vacuum: what is called good and what is called evil depends on the social norms of a time and place.

    Is charging interest to lend money evil? Is homosexuality evil? Is not taking on your brother’s widow as a second wife and impregnating her evil? Those are all things that would have been considered wrong by different cultures.

    So I’d say that any question about “good” or “evil” kind of requires asking “good in the eyes of whom” or “evil in the eyes of whom”.

    If you want to ask “did evil exist in the universe 4 billion years ago”, I’d probably say “we don’t have evidence that life existed in the universe 4 billion years ago, and I think that most conventional meanings of good or evil entail some kind of beings with a thought process being involved.”

    If you want to ask “were the first humans good and then became evil”, I’d probably say that depends a great deal on your moral code, but I imagine that early humans probably violated present-day social norms very substantially.

    If what you’re really working towards is something like the problem of evil:

    https://en.wikipedia.org/wiki/Problem_of_evil

    The problem of evil, also known as the problem of suffering, is the philosophical question of how to reconcile the existence of evil and suffering with an omnipotent, omnibenevolent, and omniscient God.[1][2][3][4]

    The problem of evil possibly originates from the Greek philosopher Epicurus (341–270 BCE).[38] Hume summarizes Epicurus’s version of the problem as follows:

    “Is [god] willing to prevent evil, but not able? then is he impotent. Is he able, but not willing? then is he malevolent. Is he both able and willing? whence then is evil?”[39][40]

    So if what you’re asking is “how could good give birth to evil”, I’d probably say that I wouldn’t call humans or the universe at large “purely good” for any extended period of time in any conventional sense of the word. Maybe in some very limited, narrow, technical sense: if you decide that only humans, or something capable of that level of thought, can engage in actions that we’d call good and evil, and the first actions of the first being that qualified as human happened to be something that we’d call “good”, okay. But I assume that that’s not what you’re thinking of.

    I think that you had pre-existing human behavior at one point in the past, and ethical systems that developed later which might be used to classify that behavior, and not in any consistent way.



  • Maybe check with the tenants, and if it’s not theirs, look and see what’s on it.

    I can imagine two scenarios:

    • Whoever put it there was hoping to record someone else. I think that @[email protected] brings up a good point: if the battery life is as limited as Pony believes it to be, that seems like it’d seriously limit what one could hope to get. If someone did want more recording time, it’d be really easy to hook it up to a USB power bank to let it run far longer (and, I assume, to hide it better), which seems like it’d kinda argue against that. If OP is the landlord, maybe whoever placed it was curious what he’d say when he was near the box, if they knew that he was going to be in the house. Or maybe it was someone else who wanted to record the tenants, especially if the box is in a living area.

    • Whoever put it there was using it in the way one traditionally uses a voice recorder, to take notes hands-free. If that’s the case, I imagine that it may start with snippets from whoever put it there and forgot it. Like, say that you’re trying to figure out what the switches in the breaker box are linked to. I will admit that it seems odd that, if the battery life is so short, someone would have left it and OP happened to find it before the battery ran out. But I don’t know why OP was looking at the box in the first place. If it was because there was some electrical problem and OP was coming out to look at it, it could be a tenant trying to look into the situation themselves.

      I’d note down something like that in text, but there are people who prefer to use devices like that or voice recorder apps on phones, and if it’s voice-activated, it’s hands-free, unlike jotting down text.


  • [continued from parent]

    The decline of spam

    A number of forums ran into problems with spam at various points. Usenet particularly had problems with it. I rarely see it today, at least not in an obvious form. I think that the decline is in part due to something that many users here often complain about — the centralization of social media. When there were many small sites dedicated to, say, bass fishing or golfing or whatever, admins had limited resources. But on Facebook or Reddit or whatnot, the anti-spam resources are basically pooled at the site level. Also, the site admins have visibility into activity spanning the entire site. Instead of writing a bot to spam, say, forum system X and then hitting each of many different sites using that forum software, one has to spam many different subreddits on Reddit, say, and that’s a lot more visible to someone like the Reddit staff, who can see all of it.


  • There have been a number of major changes that I’ve noticed over my time. Some good, some bad.

    The geographic scale has increased

    Discussion has become international, global. The Internet was not used by the general population in the US in the 1980s and 1990s. Electronic forums tended to be local, on things like BBSes, in an era when local and long-distance calls had very different pricing. They’d more-often deal with matters of local interest, whereas that’s less-likely today. In the 2000s, uptake of Internet-based forums was still limited, even as Internet use grew.

    The average level of technology knowledge among users has decreased

    Until maybe the late 1990s or so, I’d say that personal computer ownership was somewhat-unusual. Certainly many older people didn’t own and use a personal computer. Many people who were doing so were hobbyists or worked in technology-related fields.

    I think that relative to most non-technology-specific platforms, the Threadiverse in 2026 is something of a partial throwback here, probably just because (a) there’s a bit of a technical bar to understanding and starting to use it that acts as a filter and (b) some of the people who use it are into open-source.

    Also, the level of knowledge that formed a barrier to access has come down, as software has become much easier to use and requires less configuration. In the 1980s, it wouldn’t have been unexpected to need to manually set Hayes modem configuration strings. On the Mac, even obtaining executable software from the Internet was a major hurdle; Macs didn’t ship with software capable of downloading a file and then converting it into an executable, so one had to bootstrap the process by obtaining, offline, software capable of doing so. Setting up a smartphone for Internet access in 2026 mostly involves turning it on and plonking in a credit card number; the user experience doesn’t involve seeing terms like “MTU”, “SLIP”, “PPP”, or “netmask”.

    The level of wealth of users relative to society as a whole has dropped

    One thing that I think that a number of people don’t appreciate in the 2020s is how staggeringly much more affordable telecommunications and computing devices have become. I’d say that the personal computer era in the US really kicked off in the late 1970s. At one point, I had a Macintosh 512K, a computer released in 1984. Now, sure, that wasn’t the cheapest platform out there even at the time, but it was stupendously expensive by the expectations of most users today:

    https://en.wikipedia.org/wiki/Macintosh_512K

    Introductory price: US$3,195 (equivalent to $9,670 in 2024)[1]

    That’s not including any storage media, a modem, the cost of a phone line, or the cost of your (almost-certainly-time-limited) access to your Internet service provider. And it was a lot harder to justify Internet access at that point, given what was out there and the degree to which computer use was present in typical life.

    The DOS world was somewhat-more economical, but hardly comparable to the situation today:

    https://en.wikipedia.org/wiki/IBM_Personal_Computer

    Introductory price: US$1,565 (equivalent to $5,410 in 2024)

    Those prices priced a lot of people out.

    Today, you can get an Internet-capable device, one that includes far more hardware than came standard back then (hardware that would have had to be purchased separately), for well over an order of magnitude less. That has tremendously expanded the income range of people who have Internet access. It means that there’s a much broader range of perspectives.

    The level of education relative to society as a whole has dropped

    For a substantial amount of time, a disproportionate chunk of the people who had Internet access were university students who got access via their university; in the US, this was government-subsidized. That meant that higher education users were disproportionately represented; users on something like Usenet weren’t a very representative sample of society as a whole (even aside from any income/education correlation). The Internet is just about everyone now, not something focused on academia and engineering.

    The use of pictorial elements has increased

    It’s not uncommon to see images (or even video) occupying a substantial amount of eyeball space in discussions. Bandwidth limitations at one point just made sticking images in-line painful. But there were also technical bars that dropped, like forum-specific inline image support arriving and then emojis entering Unicode. I might have seen the occasional emoticon in the 1990s, but that was about it.

    That being said, ASCII art is something that I rarely see now, but which was more-common in an era when many people were viewing all discussion in monospaced typefaces.

    Messages are much shorter

    Some of this has been due to technical impositions; Twitter imposed extremely short message lengths, but even so, I’d say that Usenet tended towards much longer messages, more akin to letters.

    Rise and fall of advanced markup

    The early systems that I can think of were text-based and didn’t support styling. Then an increasing number of forums started supporting things like BBCode or HTML. But I think that the modern consensus has come mostly back to text, though maybe with embedded images and Unicode, with some supporting Markdown. Lower barrier to use, and I think that in practice, a lot of styling just isn’t all that important to get points across.

    The rise of data-harvesting and profiling

    I think that many people viewed messages as more ephemeral. You could say something and it might go away when a BBS dies or the like. But with sites like archive.org and scrapers and large-scale efforts to profile, electronic forums have more of a permanence than they once did. The Trump administration demands that visitors to the US hand over social media identities, for example. I don’t know how much that weighs on discussion in society as a whole, but it certainly alters how I think about electronic discussion to some degree.

    The rise of generative-AI-generated text in posts

    Probably one of the most-recent changes. Bots aren’t new, but the ability to make extended, really human-sounding text is.

    Cadence of discussion has increased

    Many discussion forums historically didn’t have a mechanism to push notifications to a user that there was activity in a discussion. A user might only find out the next time they happened to stop by the forum. Today, social media software on a smartphone, wherever the user is, might play an alert sound within seconds of new activity.

    Long-tail forums have become more common

    https://en.wikipedia.org/wiki/Long_tail

    The spread of Internet use and the enormous expansion of the potential userbase has made very niche forums far more viable than they historically had been. Because the pool of users is so large, even if only a very tiny percentage of that pool is interested in a given topic, the number of topics that have a viable number of interested users becomes much greater. Even if only 0.001% of five billion Internet users care about a topic, that’s still fifty thousand potential participants.

    The Threadiverse today is also something of a throwback here, as the userbase is much smaller than something like Twitter or present-day Reddit.

    The rise of deletion

    Many systems in the past didn’t support deletion of messages, or maybe only permitted administrators to do so.

    Part of the problem is that it’s not generally practical to ensure that deletion of a message actually occurs, once that message has been made visible to the broader world. And generally, it’s considered to be a bad idea in computer security to give a user the impression that they have the ability to do something if there are ways to defeat it.

    But in practice, a lot of people seem to want to have the ability to delete (or modify) messages, and I think that consensus is that there’s value there, even if it rests upon a general convention to not go digging for deleted messages.

    The rise and fall of trolling

    I’m talking about trolling in the traditional sense of the word, where someone posts a message that looks like it might be a plausibly innocent message, but contains intentional errors or just enough outrageous stuff to spur many people to respond and start a long thread or argument.

    The idea is that a user trying to do this would “troll” for fish to try to get as many bites as possible.

    Maybe this is just the forums I use, but I remember that showing up quite a bit in the forums I used in the 2000s, like Slashdot. A Usenet tactic was to cross-post to forums likely to have users who would argue with each other, like a newsgroup dedicated to Macs and one to Windows PCs (unlike the Threadiverse, responses to a crosspost would typically go to all newsgroups). But I’ve seen less and less of it over time. I’m not sure why. Maybe it’s just that engagement-seeking algorithms and news media have institutionalized ragebait so much that we already have so many generated arguments that it just gets lost in the noise.

    [continued in child]



  • I had one before, but it was stupid cheap, the connection was frail, and it didn’t have good range.

    I’m not sure exactly whether this was protocol improvements or some other form of implementation improvement (antenna location?) but, yeah, I’ve found that the popular Sony WH-1000XM6 headphones have a much better ability to talk to my Bluetooth transceiver at range than do a number of older earbuds and headphones I have. The range is closer to, say, a cordless phone or WiFi.

    Depending upon your use case, that may not matter; for a smartphone, more range probably doesn’t matter much. But if you’re talking to a desktop computer, it can be handy.


  • [Moving this text to a separate response, as it really deals with a separate set of issues]

    There’s also the issue not just of US law, but of international law, and I think that that’s where more of the interesting questions come up. Under treaties that the US is party to, as the UN rules go, to engage in military conflict other than individual defense or defense of an ally, the US should seek approval from the UNSC (which it would not get on Venezuela; Russia or China would presumably block this). The US has certainly stretched things; its legal argument that the UNSC authorized action against Saddam Hussein is very questionable, for example. But what one sees is a steady erosion of willingness to follow UN rules. Russia and the US are two of the permanent seat holders on the UNSC. Russia didn’t bother to try to get authorization to invade Ukraine (which other members would obviously veto), and I suspect that the Trump administration won’t on Venezuela.

    The five permanent UNSC seat holders are the US, China, Russia, France, and the UK. Outside of nuclear weapons, Russia’s military power has substantially declined from the Cold War era, and its economy is of limited size. China is much more militarily powerful than it once was, and today, France and the UK are substantially less militarily-capable in most regards than China and the US. Prior to Brexit, I had thought that the EU would federalize and that the French and UK seats would then become an EU seat, which would have done something to restore the degree to which seat holders had the ability to exert military force. But as things stand, the UNSC, which was crafted to include the major military powers in the world, is now substantially out-of-whack with actual military ability. There is sense in participating in a legal system that avoids conflict by reflecting what would happen in an actual conflict: instead of having to fight a war because Party 2 would fight you over the matter, you just have a vote that would produce a comparable outcome at far less cost. In practice, though, I think the major military powers increasingly don’t care what the UNSC says, for two reasons:

    • In some cases, a permanent seat holder may use a veto to increase the political cost of a country engaging in war even when the seat holder would not actually go to war against the country wanting to use military force. This degrades the stability of the system and encourages parties to disregard it. I think that this is probably the largest flaw in the system as it stands, and it may be fundamental to it.

    • Secondly, in 2026, China and the US in particular are, in most regards, much more militarily powerful than the other permanent seat members, and may simply not be willing to extend them a veto over their military activity. All countries holding a permanent seat are nuclear powers with some form of second-strike capability, which means that war with them is, at least in theory, quite risky. In practice, though, actually using nuclear weapons comes with a lot of drawbacks; they are not a terribly usable weapon. Countries might well be willing to engage in conflict even expecting strong opposition from permanent seat holders, betting that it will not rise to the use of nuclear weapons. The UK is, absent playing nuclear hardball, going to have very limited ability to militarily oppose China if China wants to conduct a conventional land invasion in Asia. Playing nuclear hardball with China is probably going to be pretty risky.

      https://en.wikipedia.org/wiki/Handover_of_Hong_Kong

      During talks with Thatcher, China planned to seize Hong Kong if the negotiations set off unrest in the colony. Thatcher later said that Deng told her bluntly that China could easily take Hong Kong by force, stating that “I could walk in and take the whole lot this afternoon”, to which she replied that “there is nothing I could do to stop you, but the eyes of the world would now know what China is like”.[35]

      In theory, the UK could veto such an action at the UNSC. In practice, China was willing to ignore whether-or-not it had UNSC approval, because it knew that the UK lacked the ability and/or will to back up that veto with military force.

    This isn’t to say that the UNSC system has always been perfect, but the less it maps to actual ability and will to use military force, the more I expect it to be viewed as irrelevant by the major powers. Trump’s action here will probably further weaken it.