It seems like a more polite way of handling this in private spaces is just to ask that people take them off - just like we do when a pig farmer walks into our house with their boots on.
I get why people are creeped out by them, but we get filmed or photographed hundreds of times a day in a big city when we are in public spaces. Gatekeeping a potentially useful technology because it films people in public makes little sense -- everyone is _already_ filmed in public. ATM cameras, stoplight cameras, drone cameras, smartphone cameras, security cameras, doorbell cameras. You are on camera every time you step out of your house. You are on camera every time you open your work computer. Singling out cameras in eyeglasses as "creepy" is kind of worrying about a drop in the ocean. Cameras on self-driving cars. Nanny cams. Closed-circuit cameras. The things are everywhere, and they are always invasions of privacy. Why is the line the "creeper" glasses?
I'd be ok with it if we were for banning all non-consensual recordings in all spaces. But we're very much not.
And if we're not, then having a personal heads-up display that is contextual to your current surroundings or has augmented reality capability is too useful to not use (eventually). I'm bad with names, and good with faces. That use-case alone would be worth it for me, if it were available.
Great, let's regulate it! And why are glasses more offensive than cell phone cameras, or go pros, or drones? I genuinely do not understand why people don't worry about the other form factors, but draw the line at the glasses, so help me here. To be clear - I understand why people find being recorded creepy. I don't understand why the glasses form factor is creepy but random cell phone recordings that are shared on the internet all the time without the consent of the recorded people aren't.
tl;dr It's the difference between possessing a camera and actively pointing it at someone.
Think about the practical aspect of it. I have to point my phone at you to record you. It's really quite conspicuous. It's also mildly inconvenient for me so I won't be doing it the vast majority of the time.
Whereas the glasses point wherever you're looking, are expected to be recording constantly, and are expected to do things with the data involving third parties. It's the same as a VR headset, except in that case the expectation is that the footage is neither sent anywhere nor even retained, merely presented live to the user as if he were looking at you (and his face is already pointed in your direction).
"It seems like a more polite way of handling this in private spaces is just to ask that people take them off - just like we do when a pig farmer walks into our house with their boots on."
Just FYI, they do heavily market these toward prescription (RX) glasses wearers. So you can't quite as simply ask someone to take off their glasses when doing so means they can no longer see.
I'm going to guess that someone who can afford smart glasses can afford to have another pair of unsmart glasses. What is it about the _glasses_ that people find creepier than a smartphone that can literally do even more invasive things than the current glasses technology?
I mean, I grew up with AOL AIM, Yahoo Messenger, and IRC... yet I switched every time a new tech came out with more of my friends on it. Why do we think discord will be any more sticky than Digg or Slashdot, or any of the above?
People will migrate, some will stay, and it will just be yet another noise machine they have to check in the list of snapchat, instagram, tiktok, reddit, twitter, twitch, discord, group texts, marco polo, tinder, hinge, roblox, minecraft servers, email, whatsapp and telegram, and slack/teams for work.
Kids today are alarmingly bad at technology. This is not a "kids these days" situation, this is absolutely true. They understand "tap on icon, open app, there's a feed and DMs".
I mean it, the tech illiteracy of gen Z/alpha is out of this world, I did not expect a generation that grew up with technology to be so inept, but here we are. But they grew up with a 4x4 grid of app icons, not with a PC.
I don’t think people understand the true level of tech illiteracy of Gen Z. A couple years back I did an internship with the IT guy at my high school, and the vast majority of the problems students had with the Chromebooks we used were, in no specific order:
- Not understanding that a dead battery means it won’t turn on
- Trying to use them without an internet connection
- “The screen won’t work” when trying to use non-touchscreen models like a tablet
- “I can’t see my stuff” when using the guest mode rather than their login, or when they used a PC and they couldn’t see the docs icon on their desktop
That’s not even to mention the abysmal typing skills of most students; so many 15 WPM hunt-and-peck typists.
There’s a mountain of issues along those lines we ran into, and it was honestly frightening to watch.
I feel like asking someone working IT about the average technical literacy of the people they work with is similar to asking an EMT about the health of an average person. Not to discredit your experience, but you should account for the fact that a lot of the people you helped were the ones who were already filtered out by their inability to fix trivial problems.
I'm not saying this issue doesn't exist. But I want to reframe it as the low bar for using tech dropping through the floor. Previously, you had to have at least somewhat of an idea for what you're doing, but nowadays most people who don't care about tech are reliant on using the "grandma school of thought" in memorizing basic patterns and relationships without having a bigger model of what's going on. This mostly affects newer generations and older people who only started using technology recently, because this strategy didn't fly in the past. But technical literacy is falling for everyone.
But the absence of the low bar doesn't mean that everyone's chasing it. In high school, I was surrounded by peers who were interested in tech, sometimes being far better than me. The average level of understanding was pretty alright. In university, lots of people did just fine. I know countless people my age who are highly skilled in computer science. We're not in the majority, but there's plenty of us. I'm tired of it always being framed as an issue stemming from some kind of unique lack of personal responsibility and low intelligence related to age, used to apply stereotypes to hundreds of millions of people. Every average user will optimize actually understanding anything out of their brain if given an opportunity, it's just that that opportunity had only appeared fairly recently.
Yeah, I work with kids and it's admittedly a bit disheartening having conversations like
> why don't you make a separate account for your sibling
> I don't know how to make an email
> but you needed an email for your account
> yeah, I just use my school email
By that age, as a young teen, I knew how to make new accounts and research what I didn't know. And I'm not sure if it's my place to help them create an email without their parents' knowledge.
Correct. From my personal experience (I have kids and nieces/nephews this age), they all think an app is the thing that they scroll in, and any attempt to explain the very basics of internet connectivity, servers, databases, etc., ends with them basically experiencing a blue-screen moment and backing away to the safety of the endless scroll.
The most complex concept they can understand is a mail attachment or CapCut, but that's it. Ten minutes later they will download a phone flashlight app that requires Google services for app delivery.
Shocking.
I ended up refusing to help with anything related to technology in any way other than pointing to help pages, manuals, and search engines, and asking questions.
Isn't that true of Python as well? I would argue that GitHub's decision to use Markdown for formatting, more than any other, is what led to its widespread adoption for other use cases. The simple tool to share code ate the world.
I'm continually surprised that Microsoft hasn't completely cornered the market on LLM code generation, given their head start with copilot and ready access to source code on a scale that nobody else really has.
The last one is fairly simple to solve. Set up a microphone in any busy location where conversations are occurring. In an agentic loop, send random snippets of the audio recordings for transcription to text. Randomly feed that to an LLM, appending to a conversational context. Then also hook up a chat interface to discuss topics with the LLM. The random background noise, and the context generated in response to it, serve as a confounding internal dialog alongside the conversation it is having with the user via the chat interface. It will affect its replies to the user.
It might interrupt the user's chain of thought with random questions about what it is hearing in the background. If given tools for web search or generating an image, it might do unprompted things. Of course, this is a trick, but you could argue that the sensory input of living sentient beings is the same sort of trick, I think.
I think the conversation will derail pretty quickly, but it would be interesting to see how uncontrolled input had an impact on the chat.
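The loop described above can be sketched in a few lines. This is a toy simulation, not a real build: `transcribe_snippet` and `call_llm` are hypothetical stubs standing in for a speech-to-text service and an LLM API.

```python
import random

# Hypothetical stubs: a real build would call a speech-to-text
# service and an LLM API here. These only simulate the shapes.
def transcribe_snippet():
    background = [
        "...two coffees please...",
        "...did you see the game last night...",
        "...the bus was late again...",
    ]
    return random.choice(background)

def call_llm(context):
    # Stand-in for an LLM call; just reports how much context it saw.
    return f"(model reply after {len(context)} context messages)"

def agentic_loop(user_messages, snippets_per_turn=2):
    """Interleave random overheard snippets into the chat context,
    so background noise acts as a confounding internal dialog."""
    context = []
    replies = []
    for msg in user_messages:
        # Randomly mix in overheard audio before answering the user.
        for _ in range(snippets_per_turn):
            context.append({"role": "background", "text": transcribe_snippet()})
        context.append({"role": "user", "text": msg})
        reply = call_llm(context)
        context.append({"role": "assistant", "text": reply})
        replies.append(reply)
    return context, replies

context, replies = agentic_loop(["Hi there", "What were we talking about?"])
```

The interesting part is that the "background" messages sit in the same context window as the user's, so any real model would condition its replies on both.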
I'll add to this: if you work on a software project to port an Excel spreadsheet with all those properties to real software, and the spreadsheet is sophisticated enough to warrant the process, the creators won't be able to remember enough details about how they created it to tell you the requirements necessary to produce the software. You may do all the calculations right, and, because they've always had a rounding error that they've worked around somewhere else, your software shows that calculations which have driven business decisions for decades were always wrong. The business will then insist that the new software is wrong rather than owning the mistake. It's never pretty, and it always governs something extremely important.
Now, if we could give that Excel file to an LLM and have it create a design document that explains everything the file does, that would be a great use of an LLM.
And adding ads into the responses is _child's play_: find the ad with the most semantic similarity to the content in the context. Insert it at the end of the response, or every N responses, with a convincing message that, based on our discussion, you might be interested in xyz.
For a subtler and slimier approach, boost the relevance of brands and keywords, and when they are semantically similar to the most likely token, insert them into the response. Companies pay per impression.
When a guardrail blocks a response, play a political ad for a law-and-order candidate before delivering the rest of the message. I'm completely shocked nobody has offered free GPT use via an API supported by ad revenue yet.
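The "find the ad with the most semantic similarity" step is sketched below. Everything here is made up for illustration: the ad inventory is hypothetical, and the bag-of-words `embed` is a crude stand-in for a real sentence-embedding model.

```python
import math
from collections import Counter

# Toy stand-in for an embedding model: bag-of-words word counts.
# A real system would use a sentence-embedding model instead.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical ad inventory.
ADS = [
    "Fly cheap with AeroBudget airlines",
    "HikeMaster boots for mountain trails",
    "CodePilot: an AI assistant for programmers",
]

def pick_ad(context_text):
    """Return the ad most semantically similar to the conversation."""
    ctx = embed(context_text)
    return max(ADS, key=lambda ad: cosine(ctx, embed(ad)))

def respond_with_ad(reply, context_text, turn, every_n=3):
    # Append an ad to every Nth response, framed as a suggestion.
    if turn % every_n == 0:
        ad = pick_ad(context_text)
        reply += f"\n\nBased on our discussion, you might be interested in: {ad}"
    return reply

msg = respond_with_ad("Try waterproofing them first.",
                      "my boots leak on mountain trails", turn=3)
```

Swapping the word-count vectors for real embeddings is the only change needed to make this production-slimy.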
I'll attempt to provide a reasonable argument for why speed of delivery is the most important thing in software development. I'll concede that I don't know if the below is true: I haven't conducted formal experiments, have no real-world data to back up the claims, and haven't even defined all the terms in the argument beyond generally accepted terminology. The premise of the argument may therefore be incorrect.
Trivial software is software for which:
- the value of the software solution is widely accepted and widely known in practice, and
- formal verification exists and can be automated, or
- only a single satisfying implementation is possible.
Most software is non-trivial.
There will always be:
- bugs in implementation
- missed requirements
- leaky abstractions
- incorrect features with no user or business value
- problems with integration
- problems with performance
- security problems
- complexity problems
- maintenance problems
in any non-trivial software no matter how "good" the engineer producing the code is or how "good" the code is.
These problems are surfaced and reduced to lie within acceptable operational tolerances via iterative development. It doesn't matter how formal our specifications are or how rigorous our verification procedures are if they are validated against an incorrect model of the problem we are attempting to solve with the software we write.
These problems can only be discovered through iterative acceptance testing, experimentation, and active use, maintenance, and constructive feedback on the quality of the software we write.
This means that the overall quality of any non-trivial software is dominated by the total number of quality feedback loops executed during its lifetime. The number of feedback loops during the software's lifetime is bounded by the time it takes to complete a single synchronous feedback loop. Multiple feedback loops may be executed in parallel, but Amdahl's law holds for overall delivery.
Therefore, time to delivery is the dominant factor to consider in order to produce valuable software products.
Your slower-to-produce, higher-quality code puts a bound on the duration of a single feedback loop iteration. The code you produce can perfectly solve the problem as you understand it within an iteration, but cannot guarantee that your understanding of the problem is not wrong. In that sense, many lower-quality iterations produce better software quality as the number of iterations approaches infinity.
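The argument above can be put into a toy model. The loop durations, worker count, and parallelizable fraction below are invented numbers purely for illustration, not measurements.

```python
# Toy model: over a fixed product lifetime, compare a few slow,
# high-quality iterations against many fast, lower-quality ones.
LIFETIME_DAYS = 365

def feedback_loops(loop_duration_days):
    # Loops are serial: the count is bounded by single-loop duration.
    return LIFETIME_DAYS // loop_duration_days

def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's law: overall speedup when only a fraction of each
    delivery cycle can run in parallel (e.g. parallel feature work)."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / workers)

slow = feedback_loops(30)  # careful monthly releases
fast = feedback_loops(3)   # rapid 3-day iterations

# Even with 8 workers and 80% parallelizable work, the serial part
# of each loop caps the overall delivery speedup well below 8x.
cap = amdahl_speedup(0.8, 8)
```

The point of the model: parallelism helps, but the serial portion of one feedback loop dominates, so shrinking a single loop's duration is what multiplies the number of loops you get over the software's lifetime.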
>> Your slower-to-produce, higher-quality code puts a bound on the duration of a single feedback loop iteration. The code you produce can perfectly solve the problem as you understand it within an iteration, but cannot guarantee that your understanding of the problem is not wrong. In that sense, many lower-quality iterations produce better software quality as the number of iterations approaches infinity.
I'll reply just to that, it being the tl;dr. First of all, tech debt is a thing, and it accumulates mostly thanks to fast feedback iterations. And in my experience, the better the communication, the better the implementation, and it happens that you can have solid features that you'll likely never touch again. User habit is also a thing: continuing to iterate on something a user already knows how to use, and changing it, is a bad thing. I'd also argue it's bad product/project management. But my whole original argument was about why we'd need greater speed in the first place. Better tooling doesn't necessarily mean faster output, and productivity isn't measured as just faster output. Let me give a concrete example: if you ask an LLM to produce a UI with some features, most of them will default to using React. Why? Why can't we question the current state of the web instead of continuing to pile abstractions on top of abstractions? Even if I ask the LLM to create a vanilla web app with HTML, why can't we have better tooling for sharing apps over the internet? The web is stagnant, and instead of fixing it we're building castles on top of castles.
Tech debt doesn't accrue because of fast feedback iterations. Tech debt accrues because it isn't paid down or is unrecognized during review. And like all working code, addressing it has a cost in terms of effort and verification. When the cost is too great, nobody is willing to pay it. So it accrues.
There aren't many features that you'll never touch again. There are some, but they usually don't really reach that stage before they are retired. Things like curl, emacs, and ethernet adapters still exist and are still under active development after existing for decades. Sure, maybe the one driver for an ethernet adapter that is no longer manufactured isn't very active, but adding support for OS upgrades still requires maintenance. New protocols, encryption libraries, and security patches have to be added to curl. emacs has to be specially maintained for the latest macOS and Windows versions. Maintenance occurs in most living features.
Tools exist to produce extra productivity. Compilers are a tool so that we don't have to write assembly. High-level interpreted languages are a tool so we don't have to write ports for every system. Tools themselves are abstractions.
Software is abstractions all the way down. Everything is a stack on everything else. Including, even, the hardware. Many are old, tried and true abstractions, but there are dozens of layers between the text editor we enter our code into and the hardware that executes it. Most of the time we accept this, unless one of the layers break. Most of the time they don't, but that is the result of decades of management and maintenance, and efforts sometimes measured in huge numbers of working hours by dozens of people.
A person can write a rudimentary web browser. A person cannot write Chrome with all its features today. The effort to do so would be too great to finish. And even if finished, it would provide little value to the market, because the original Chrome would still exist and would have gained new features and maintenance patches that improve its behavior beyond the divergent clone the hypothetical engineer created.
LLMs output react because react dominates their training data. You have to reject their plan and force them to choose your preferred architecture when they attempt to generate what you ask, but in a different way.
We can have better tooling for sharing apps than the web. First, it needs to be built. This takes effort, iteration, and time.
Second, it needs to be marketed and gain adoption. At one time, Netscape and the <blink> tag it implemented dominated the web. Now it is a historical footnote. Massive migrations and adoptions happen.
Build the world you want to work in. And use the tools you think make you more productive. Measure those against new tools that come along, and adopt the ones that are better. That's all you can do.
Oh for sure, I'm not too stressed by it -- but I think the ship has sailed on the chance for mainstream Scala adoption. Perhaps it was always delusional, but there was a period when it really seemed like Scala had somewhat of a chance to be the Ruby replacement and become one of the main backend languages (after the Twitter rewrite to Scala, when Foursquare, Meetup, and various other startups of that generation were all in on the language); then there was a generation where it was at least the de facto language for data infra. Now, I'm not even sure many major companies are using it for the latter case.
Mainstream adoption isn't everything and I still mostly use Scala for personal projects, but it's such a different world working in a language where the major open source projects have industry backing. The Scala community, meanwhile, seems mostly stuck starting entirely new FP frameworks every other week. Nothing against that, but I don't see that much advantage to choosing Scala over OCaml at this point (if you don't need JVM integration).
Momentum appears to be behind Rust now, of course, but I've yet to be convinced. If it had a better GPU story and could replace C/C++ entirely I'd be on board, but otherwise I want my everyday language to be a bit closer to Python/Ruby on the scale against C/C++.