WorstPlans.com updates every Monday!

Your weekly source for terrible plans and ideas!

Category: Computers

Use a large television as an external monitor to make your laptop life easier and prevent eyestrain! And it’s a software-only solution!

Background:

Extremely large TVs have now become cheap enough to use as gigantic computer monitors. It’s possible to find a 55+” television with high enough resolution and low enough latency to work as an external monitor for even the most discerning computer-ologist.

The issue:

Most desks are not set up to accommodate a 55″ television as a monitor. In particular, the most immediately obvious arrangement—laptop in front of monitor—has the disadvantage of having a large area of the monitor blocked by the laptop (Figure 1).

Fig. 1: In this animation, we can see the red “masked out” region where the laptop screen blocks the view of the TV. This wouldn’t be a problem if the system software knew not to put windows in the red area—but since it doesn’t, the user will have to constantly rearrange their windows to avoid this “dead zone.”

Proposal:

In order to fix this laptop-blocking-screen issue, we turn to a software-only fix: simply split the monitor into three rectangular sub-monitors that are NOT blocked by the laptop screen (Figure 2).

Fig. 2: Since the system software already understands how to deal with multiple monitors, we just need to convince it that our TV is actually three separate sub-displays (screens 2, 3, and 4 here).

Fig. 3: We can see an “in-use” mockup of the multi-monitor setup here.
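For the curious, the three-rectangle split is easy to compute. Here is a rough sketch in Python (the TV resolution and laptop dimensions below are made-up examples, and a real implementation would live in the display driver, not a script):

```python
# Sketch: carve a TV into three sub-displays that avoid the laptop "dead zone".
# Coordinates use a top-left origin; all dimensions are illustrative.

def split_display(tv_w, tv_h, block_x, block_w, block_h):
    """Return three (x, y, w, h) sub-display rectangles that cover the TV,
    except for the bottom-center region blocked by the laptop."""
    left = (0, 0, block_x, tv_h)                      # full-height strip, screen 2
    middle = (block_x, 0, block_w, tv_h - block_h)    # area above the lid, screen 3
    right = (block_x + block_w, 0,
             tv_w - block_x - block_w, tv_h)          # full-height strip, screen 4
    return [left, middle, right]

# Example: a 3840x2160 TV with a laptop blocking a 1400x900 bottom-center region.
subs = split_display(3840, 2160, block_x=1220, block_w=1400, block_h=900)
```

The three rectangles cover exactly the screen area that isn’t blocked, which is all the system software needs in order to treat them as separate displays.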

Instead of splitting up a monitor into three rectangular sub-displays, it might also be possible to allow a user to “mask out” an arbitrary region of a monitor as a “dead zone” to be ignored by the system (Figure 4). This would allow the external display to still be treated as a single monitor, rather than 3 separate ones. Although a non-rectangular display may seem odd, there is precedent for it in smartphones: the Apple iPhone X “notch” and the “hole punch displays” introduced in 2019 are common examples.

Fig. 4: The red outline here shows an extreme example of how a non-rectangular external monitor might be used. Perhaps if these irregularly-shaped setups become common, the weird windows of 1990s Winamp “skins” will make a triumphant return as well!

Conclusion:

Is it possible that a far-away television is better for eyestrain than a smaller-but-closer computer monitor? Maybe! Some sort of legitimate eyeball scientist should weigh in on this matter.

PROS: The multi-monitor setup would probably actually work, although irregularly-shaped displays might be a hassle.

CONS: Could have very limited appeal.

Video chat’s next major feature: physical positioning of participants (“mingle at a party” options) to allow a huge chat to be split into manageable groups!

Background:

With the 2020 COVID plague, work-related video chats have increasingly become crowded with large numbers of participants (Figure 1).

Fig. 1: Video chat software (e.g. Zoom, FaceTime, Hangouts, Meet, Duo, Skype, and more) typically only allows participants to appear in a randomly-ordered grid. All participants are part of the same (single) discussion: there is no easy way to have a “side discussion” and then rejoin the main conversation.

The issue:

Video chats have a problem that in-person office work does not: there is no convenient way for participants of an unreasonably-large video chat group to split off into subgroups.

Instead, every discussion must take place in a SINGLE mega-discussion with all participants, or people need to leave the mega-discussion and start their own exclusive video chat groups. People often get around this by having side discussions over text, but that’s not really a great solution either.

Proposal:

In a physical workspace, it’s easy to have a small discussion: simply PHYSICALLY relocate the individuals in the conversation to an empty lunchroom table or meeting room.

To improve video chat, we simply implement the same feature: instead of each video participant just being a randomly-placed square in a grid, now each participant can also specify their location on a virtual floor plan (Figure 2).

Fig. 2: Left: the old-fashioned style of video chat. Right: the updated video chat, where you can only hear and see participants who are in close physical proximity. In this case, the chat has split into groups A, B, and C (shown here from the perspective of a person in Group B). Everyone in Group B has a normal video chat, but can only faintly hear low-audio-volume chats going on in groups A and C.

Importantly, it’s still possible to see and hear people who are somewhat nearby on the floor plan, but at a very low volume. So you can know that a conversation is going on, and join in if necessary, but it won’t drown out your primary discussion.
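In code, the “faintly audible nearby conversation” rule is just a volume curve over distance. A minimal sketch (the radii and the linear falloff are arbitrary assumptions):

```python
import math

# Sketch: attenuate each participant's audio by their distance on the
# virtual floor plan. Radii and falloff curve are invented for illustration.

FULL_VOLUME_RADIUS = 2.0   # within this distance: normal conversation volume
SILENCE_RADIUS = 10.0      # beyond this distance: completely inaudible

def proximity_volume(listener, speaker):
    """Return a volume multiplier in [0.0, 1.0] based on 2D distance."""
    d = math.dist(listener, speaker)
    if d <= FULL_VOLUME_RADIUS:
        return 1.0
    if d >= SILENCE_RADIUS:
        return 0.0
    # Linear falloff between the two radii.
    return (SILENCE_RADIUS - d) / (SILENCE_RADIUS - FULL_VOLUME_RADIUS)
```

Someone six meters away on the floor plan comes through at half volume: audible enough to notice, quiet enough to ignore.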

Previous Examples:

Some video games implement a system like this (“proximity audio”), in which you can hear voice chat only from nearby players. However, as far as I am aware, this has never been a feature in any office-focused collaboration software.

PROS: This seems like it should actually exist! Maybe it hasn’t been developed before due to the lack of a compelling business case for having large numbers of people on video calls.

CONS: Might lead to a tyrannically oppressive workplace in which work-from-home employees are mandated to always be available on video chat and present on a virtual floor plan.

Never be concerned whether or not your household electronics are spying on you! This new repurposing of the “ON AIR” sign will save you from fretting!

Background:

It seems that nearly every electronic device with a camera or microphone is now Internet-enabled and can wirelessly send video and audio to the world.

The issue:

Due to the preponderance of electronic hardware in a modern household, it can be difficult to tell which (if any) device is spying on you at any given moment (Figure 1).

This is a relatively new phenomenon, since it used to be the case that:

  1. Cameras were relatively large.
  2. Non-CIA recording devices generally needed to be physically connected to a power source and a network cable.

Fig. 1: One of these devices is currently streaming video from the user’s house—but which one? Video-enabled devices sometimes have a recording light (but not always: e.g. phones, tablets), but checking these lights is still annoying and time-consuming. And audio recording generally has no indication whatsoever!

Proposal:

The classic solution to the “are we recording right now?” question is the “ON AIR” sign, which lights up whenever a TV station is broadcasting.

This same concept can be applied to modern devices: a person would buy a new piece of “ON AIR” hardware (essentially just a WiFi-enabled screen). This ON AIR sign would connect to the household WiFi network and light up any time it detected video being sent out to the Internet.

Detecting that streaming is happening could occur in two ways:

1) Network traffic analysis can generally identify data as “this is a stream of video / audio.” This is a solution that would probably work in most cases.

2) Each video/audio-enabled device can talk to the ON AIR sign over WiFi and notify it that streaming is occurring. This would be on the “honor system”: well-behaved software would periodically report that it was streaming. One benefit of this opt-in method is that streaming devices could send additional metadata: e.g., instead of just “ON AIR (Some computer is sending video),” the user would see “ON AIR (Joe’s PowerBook G4, streaming video over RealPlayer for 4:34)”.
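The honor-system variant could be as simple as each device periodically sending a small JSON heartbeat to the sign over WiFi. A sketch (the message fields and the 30-second timeout are invented):

```python
import json

# Sketch of the "honor system" protocol: streaming devices send periodic
# JSON heartbeats; the sign lights up while any heartbeat is still fresh.

HEARTBEAT_TIMEOUT = 30  # seconds without a heartbeat before a device is "off air"

class OnAirSign:
    def __init__(self):
        self.last_seen = {}  # device name -> (timestamp, description)

    def receive(self, message, now):
        """Record a heartbeat message from a well-behaved streaming device."""
        beat = json.loads(message)
        self.last_seen[beat["device"]] = (now, beat["description"])

    def status(self, now):
        """Return descriptions of all devices with a fresh heartbeat."""
        return [desc for ts, desc in self.last_seen.values()
                if now - ts < HEARTBEAT_TIMEOUT]

sign = OnAirSign()
sign.receive(json.dumps({"device": "smart-tv",
                         "description": "smart television, streaming video"}), now=0)
```

While the smart TV keeps reporting, the sign stays lit and can display the extra metadata; if the heartbeats stop, the sign goes dark after the timeout.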


Fig. 2: Thanks to this lit-up “ON AIR” sign, the user knows that there is some device recording them, and exactly which device is responsible (in this case, the “smart television”).

Of course, neither of these methods is a 100% guarantee of detecting live video being streamed: for example, a phone that was using its cellular data to stream would not be detected.

Conclusion:

This could probably be a legitimate product!

PROS: Would be a good value-add option for a router manufacturer. “This router will light up if it detects outgoing video/audio!”

CONS: Might cause the user to become extremely paranoid upon realizing that their watch, tablet, computer, phone, external monitor, fitness tracker, headphones, and dozens of other devices could all be surreptitiously spying at any time.

Do any programmers work at your company? Give them the ultimate retirement gift—save all code contributions (e.g. `git` commits) and have them published as a leather bound book!

Background:

Occasionally, people get a gift or memento from a company after working there for a certain period of time, or, sometimes, when their jobs are outsourced to a much cheaper country and everyone is fired.

Proposal:

For programmers, what better way to commemorate their contributions to a company than a log of all their code contributions?

Specifically, the proposal is to collate all of a programmer’s commit messages into a giant bookshelf-worthy tome.

Here, I’m using git as an example (Figure 1), but any version control system with annotation could work (e.g. user comments in Microsoft Word’s “Track Changes”).


Fig. 1: Each time user “jsmith44” changed code in a codebase, a line like the ones above was generated. The comments in red are what we’ll be including in the published book. Note that only comments are included—not the actual source code.

All of a user’s contributions to a codebase can be collected by running a simple command (e.g. `git publish_book --user=jsmith44 --start 2014 --end 2018`). This would generate a raw PDF / ePub / Microsoft Word document that would then be sent off to a print-on-demand printing company to generate a physical book (Figure 2).
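(`git publish_book` doesn’t exist, of course; with real git you would gather the raw material with something like `git log --author=jsmith44 --pretty=%s` and then paginate it for typesetting. A toy paginator, with invented commit messages:)

```python
# Sketch: group commit messages into fixed-size pages for typesetting.
# The messages and page size here are invented; in practice the list would
# come from a command like:
#   git log --author=jsmith44 --since=2014-01-01 --until=2018-12-31 --pretty=%s

LINES_PER_PAGE = 40

def paginate(messages, lines_per_page=LINES_PER_PAGE):
    """Split a list of commit messages into pages of at most N lines."""
    return [messages[i:i + lines_per_page]
            for i in range(0, len(messages), lines_per_page)]

commits = [f"Fix bug #{n} (again)" for n in range(100)]
pages = paginate(commits)
```

One hundred commits at forty lines per page comes out to two and a half pages, so a prolific committer could fill a multi-volume set with ease.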


Fig. 2: After the code contributions in Figure 1 are printed out, we would end up with a book like this one. For users with particularly extensive “commit” messages, a multi-volume series could be generated.

 

PROS: Makes for a great retirement gift!

CONS: Reading it could cause existential dread, especially if the code was contributed toward an ultimately-failed project.

Use your sense of SMELL to diagnose computer errors: the new “smell checker” spell checker is a revolution in error notification!

Background:

In programming, there is the notion of “code smell”—a subtle indication that something is terribly wrong in a piece of source code, but without any (obvious) actual mistake.

For example, if you saw the following:

print("E");
print("RR");
print("OR");
print("!");

instead of

print("ERROR!");

that would be a good indication that something extremely bizarre was going on in a codebase.

The issue:

Unfortunately, in order to notice “code smell,” a person must actively review the source code in question.

Proposal:

But what if code smell could ACTUALLY generate a strange or horrible smell (Figure 1)? Then a person wouldn’t have to actively look for problems—the horrible smell of rotting meat would indicate that there was a problem in the codebase.

This smell-based notification method wouldn’t need to be restricted to programming errors, either: spell checking notifications, software updates, and other information could all be conveyed by smell.

 


Fig. 1: This bizarrely-formatted source code might cause the laptop to emit a boiled-cabbage smell.

Details:

  • A computer could have an incense-burner-like attachment that would allow it to emit various smells.
  • For example, a spellchecking warning could emit the smell of a dog that has spent an hour in the rain (Figure 2), while “you have 100 unread emails” could emit the smell of curdled milk.
  • This would allow a user to know what items require attention on their computer without even having to turn on the screen!
  • This smell-dispensing attachment could be refilled just like printer ink, making it extremely eco-friendly.
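The dispatch logic is nothing more than a lookup table from system events to smells. A sketch (every event name here is invented; the smells come from the examples in this post):

```python
# Sketch: map system events to the smells the incense-burner attachment
# should emit. Event names are invented; smells are from the post's examples.

SMELL_TABLE = {
    "spelling_error":   "dog that has spent one hour in the rain",
    "grammar_error":    "recently-touched pennies",
    "unread_email_100": "curdled milk",
    "code_smell":       "boiled cabbage",
}

def smells_for(events):
    """Return the set of smells to emit for the current events."""
    return {SMELL_TABLE[e] for e in events if e in SMELL_TABLE}
```

A user walking past a laptop emitting both wet dog and pennies would instantly know they have a spelling error AND a grammar error, no screen required.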


Fig. 2: Different warnings and errors could have different smells of various degrees of noticeability and/or unpleasantness. Here, the user might know that they have both a spelling error AND a grammar error by the mix of the spelling-smell (dog that has spent one hour in the rain) and grammar-smell (recently-touched pennies).

PROS: Allows computer errors to be conveyed without requiring the user to actively look at a screen.

CONS: People get used to strange smells fairly quickly, so these smell-based warnings would need to be addressed quickly, before the user adjusted to the smell and stopped noticing it.

Throw away your laptop privacy screen and use this camera-plus-software approach for the ultimate in security!

Background:

Laptop privacy screens (or “monitor filters”) reduce the viewing angle of a laptop screen in order to prevent evildoers from snooping on sensitive information on your laptop (Figure 1).


Fig. 1: Since this laptop does NOT have a privacy screen on it, the suspicious individual at left is able to view the contents of the laptop screen (despite being at an extreme off-center angle).

The issue:

Unfortunately, these privacy screens have a few downsides:

  1. They are inelegant to attach, and the attachment points often block a small amount of screen real estate.
  2. They slightly darken the screen, even when viewed directly head-on.
  3. When collaborating with coworkers, removing and replacing the screen is time-consuming.

Proposal:

A high-speed camera could, in combination with facial recognition and eye-tracking software, be used to determine who is looking at the screen and exactly what part of the screen they are looking at.

Then, the privacy system simply scrambles the contents of your laptop screen as soon as it notices an unauthorized individual looking at your screen (Figure 2). (When you are the only viewer, the eye tracking camera can recognize you and not scramble the screen.)
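To make the scrambled screen “look similar at a glance,” one cheap trick is to shuffle the letters within each word, which preserves word lengths, spacing, and line breaks. A toy sketch on text (a real system would scramble rendered pixels, not strings):

```python
import random

# Sketch: scramble text so layout looks plausible at a glance but the
# content is nonsense. Shuffling letters within each word preserves word
# lengths, spaces, and line breaks.

def scramble(text, seed=0):
    rng = random.Random(seed)  # fixed seed: deterministic for demonstration
    def shuffle_word(word):
        letters = list(word)
        rng.shuffle(letters)
        return "".join(letters)
    return "\n".join(" ".join(shuffle_word(w) for w in line.split(" "))
                     for line in text.split("\n"))

secret = "quarterly revenue\nprojections: confidential"
masked = scramble(secret)
```

The masked text has exactly the same shape as the original, so a passer-by sees what looks like an ordinary (if oddly spelled) document.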

 


Fig. 2: With the camera-based privacy filtering system, the laptop instantly scrambles the screen as soon as it detects that someone besides the laptop owner is looking at the screen. Note that the contents of the laptop look similar at a glance, but are actually scrambled nonsense. This prevents passers-by from immediately realizing that a software privacy filter has been applied (and potentially attracting unwanted attention).

In an extra-fancy system, the scrambling mode could be operational at all times, with the laptop only unscrambling the very specific part of the screen that the user is looking at (Figure 3). This is similar to the idea of foveated rendering, where additional computational resources are directed toward the part of the screen that the user is actually looking at.


Fig. 3: It might be possible to selectively unscramble only the part of the screen that the user is actively looking at. The region in the user’s peripheral vision would remain scrambled.

Conclusion:

If you own a laptop manufacturing company and are looking for an endless hardware task to employ your cousin or something, this would be a great project!

PROS: The laws of physics do not prevent this from working!

CONS: Might be impossible to use a laptop in a coffeeshop with this system activated.

Finally, a revolution in user interfaces: move BEYOND the keyboard for numeric input! You can easily type numbers on your phone using this one never-before-seen UI / UX paradigm. Free yourself from the tyranny of the keyboard!

When using a computer, phone, or tablet, it is occasionally the case that a user must type in numbers.

Typing numbers on a computer with a dedicated physical numeric keypad is fast and easy (Figure 1). Unfortunately, laptops frequently no longer include these hardware keypads, and smartphones and tablets never did.

The issue:

The “soft” keypad on most phones provides no tactile feedback and is often a completely separate part of the onscreen keyboard interface (i.e. you may end up in a completely different “numeric input” mode instead of the standard alphabetical layout you are familiar with).

This may lead to the user inputting incorrect numbers or, at minimum, taking longer than is necessary to input their data.

 


Fig. 1: The numeric keypad (A.K.A. “numpad”) shown on this smartphone is not easy to interact with. It would be easy to input the wrong number and have your pizza delivered to the wrong house (or some similar calamity).

Proposal:

Fortunately, modern smartphones and tablets have a number of additional sensors that we can repurpose for fast and unambiguous numeric input.

Below: see Proposal T (“Tilt sensor”) in Figure 2 and Proposal M (“Magnetic compass”) in Figure 3.



Fig. 2: Proposal T (“Tilt sensor”): in order to input a number, the user simply tilts their phone to a specific angle and holds it there for, say, one second. The value entered is the number of degrees the user tilted the phone (from –90º to +90º). For single-digit inputs, we could make the process simpler and map the range from –45º to +45º to 0 to 9, as shown above.

 


Fig. 3: Proposal M (“Magnetic compass”): here, the phone’s magnetic compass is used in order to determine the user’s compass orientation (a number between 0 and 359). The user simply physically rotates themselves (and their phone) to point in the direction of the desired numeric input. In the example above, we have divided the orientation value by 10 in order to reduce the degree of precision demanded from the user (as shown on the left side, an orientation of 270º results in the input “27,” as would 271º, 272º, etc…).
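Both proposals reduce to one-line formulas. A sketch, using the ranges from the captions above:

```python
# Sketch of the two numeric-input mappings. The angle ranges and the
# divide-by-10 rounding are taken from the figure captions.

def tilt_digit(angle_degrees):
    """Proposal T: map a tilt angle in [-45, +45] degrees to a digit 0-9."""
    clamped = max(-45.0, min(45.0, angle_degrees))
    return min(9, int((clamped + 45.0) / 90.0 * 10))

def compass_number(heading_degrees):
    """Proposal M: map a compass heading in [0, 360) to a number 0-35."""
    return int(heading_degrees % 360) // 10
```

Note one ergonomic consequence of Proposal M: entering a phone number requires the user to spin in place roughly a dozen times.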

Additional Input Methods:

There are alternative input methods that may also be useful for numeric input. For example, to input the number N, the user could:

  1. Raise their phone N inches into the air
  2. Quickly cover up their phone’s camera N times
  3. Shriek at their phone at (50 + 5*N) decibels. This would be faster than relying on normal voice input, since it would not require complicated machine learning techniques to process.

There may be additional yet-undiscovered methods as well!

PROS: Frees users from the technological dead-end of the hardware keyboard. Finally, innovation in the user input space!

CONS: None.

Re-visit the past with a new “old monitor nostalgia” mode for your expensive high-resolution television or computer display!

The issue:

Modern computers (and TVs) have large, high-resolution screens.

But sometimes people have nostalgia for the past—perhaps yearning for Cold War-era computing, when the harsh glow of a 9-inch CRT monitor represented the pinnacle of technology (Figure 1).


Fig. 1: This 1984 black-and-white Macintosh cost approximately $5500 in 2019 dollars, which would buy approximately 10 economy-priced laptops in 2019.

Proposal:

Modern monitors should have an option to emulate the behavior of various old display types.

For example, a high-resolution monitor could easily pretend to be the following:

  • A 1950s tube television
  • The tiny black-and-white screen of the 1984 Macintosh (Figure 2)
  • The monochromatic green display of the Apple // (Figure 3)

 


Fig. 2: In “Mac ’84 mode,” only a tiny fraction of the screen is used (left), in order to give the user that authentic 9-inch-screen experience. (The blue area represents an unusable border region.)

 


Fig. 3: Apple // mode. After a while, you actually stop noticing that the whole display is green!
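For “Mac ’84 mode,” the emulation math is trivial: center a 512×342 window in the modern panel and blank everything else. A sketch, using the real resolutions of both machines:

```python
# Sketch: compute the centered "Mac '84" window on a modern panel, plus the
# fraction of the screen left as unusable border. Resolutions are the real
# published specs; everything else is illustration.

MODERN = (5120, 2880)   # 2018 27-inch iMac panel
MAC84 = (512, 342)      # original Macintosh display

def centered_window(outer, inner):
    """Return (x, y, w, h) of `inner` centered inside `outer`."""
    ox = (outer[0] - inner[0]) // 2
    oy = (outer[1] - inner[1]) // 2
    return (ox, oy, inner[0], inner[1])

window = centered_window(MODERN, MAC84)
unused_fraction = 1 - (MAC84[0] * MAC84[1]) / (MODERN[0] * MODERN[1])
```

Nearly 99% of the panel ends up as authentic unusable border, which is arguably the whole point.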

Conclusion:

Now that a “Dark Mode” theme has been implemented by nearly every operating system vendor, the next arms race is sure to be “retro display mode” or “retro CRT filter” mode.

PROS: Gives people a greater appreciation of modern technology.

CONS: May cause eyestrain.

 


Supplemental Fig. S1: The actual number of pixels on a 2018 27″ iMac is 5120×2880 (14,745,600), as compared to 512×342 (175,104) on the original Mac. That’s 84.2 times more pixels, or about 253 times more if you count the R, G, B channels separately!

Don’t let a modern user interface coddle you with easy-to-identify-buttons—demand a confusing and unlabeled mystery zone of wonders!

Background:

It is often recommended that pet owners buy “challenging” toys to keep their pets mentally stimulated in a world where the owners take care of all the pet’s needs.

Although an owner could simply put a dog biscuit in a bowl, it would be more exciting for the dog if the biscuit were inside a difficult-to-open ball that required the dog to work to figure it out.

The issue:

Similarly, modern automation has removed many elements of daily life that were once mentally challenging. For example, turn-by-turn directions make it theoretically possible for a person to go through life without ever learning how to read a map.

Proposed idea, which has already been implemented:

A long time ago, all user interface elements on a computer were clearly marked: a button would have a thick border around it, a link would be underlined in blue, etc.

Unfortunately, this sort of coddling may cause the human species to become helpless and incapable.

What is needed is an unforgiving type of interface that does not clearly label elements that accept user input: this will force humans to become better at remembering things.

A case study is available in Figure 1. Can you figure out what is, and is not, an interactable UI element?


Fig. 1: In order to prevent the user’s brain from atrophying due to lack of use, Google has developed a settings screen for Android that has no visual indication of what is and is not a button. Try puzzling through it yourself: can you guess what tapping on each element would do? Answers in Figure 2. This screenshot is from Android 9, but the situation is identical in Android 10 (2019).

 


Fig. 2: Answers: BLUE is a normal app button and GREEN is a user-interface-related button. The two red rectangles indicate “buttons” that highlight when clicked, but do nothing otherwise (it is theoretically possible that they do something on other phones).

Google shouldn’t get all the credit here, though: the idea of a complex swiping-puzzle-based interface was arguably pioneered by Apple. If you don’t believe it, find someone with an iPad and ask them to activate the multiple-apps-on-the-same-screen mode: you’ll be amazed by the quality and difficulty of this puzzle!

Conclusion:

With the addition of unlabeled user interface elements and a huge array of “swipe” gestures, modern phones—both iPhones and Android phones—are adding a new category of exciting brain-challenging puzzles to everyday life.

PROS: It is theoretically possible that a user who plays these memory games with their phone will become better at crucial memorization and concentration-based tasks (there is zero evidence of this, but it seems intuitively appealing, which is good enough here).

CONS: None!

Check your server logs for incredible deals, thanks to this new system for putting advertisements everywhere!

Background:

Some widely-used computer programs are free, and are supported exclusively as hobby projects by unpaid developers.

The issue:

Unfortunately, there is no financial mechanism to encourage further development and enhancement of these programs. Even if a hundred million people depend on a program, there is no simple way for them to support the developer.

It would be possible for software developers to figure out some sort of monetization scheme, but this requires a different skillset from software development. Plus, many programmers aren’t interested in also dealing with marketing.

Proposal:

Nearly all programs—both on servers and on regular desktop machines—write messages to a system log somewhere on the computer.

Developers of these un-monetized free utilities could sell ad space in their logs: instead of writing only important data to the log (“USB hard drive failed to respond” or “bluetooth device unexpectedly disconnected”), the program could also pollute the log files with various advertisements (see Figure 1).
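As a sketch, an “ad-supported log” needs nothing more than a counter (the ad text and interval below are invented examples):

```python
# Sketch: a log writer that appends a sponsored line after every few real
# messages. Ad text and interval are invented for illustration.

AD = "FATAL SYSTEM ERROR: SHRIMP PLATTERS ARE 25% OFF WITH CODE [SERVERSHRIMP]"

class AdSupportedLog:
    def __init__(self, interval=3):
        self.interval = interval       # one ad per `interval` real messages
        self.lines = []
        self._real_messages = 0

    def write(self, message):
        self.lines.append(message)
        self._real_messages += 1
        if self._real_messages % self.interval == 0:
            self.lines.append(AD)      # pollute the log, fund the developer

log = AdSupportedLog()
for msg in ["USB hard drive failed to respond",
            "bluetooth device unexpectedly disconnected",
            "disk quota exceeded"]:
    log.write(msg)
```

Any log-monitoring utility that pages an on-call engineer on “FATAL” lines will now also deliver the shrimp promotion, exactly as intended.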


Fig. 1: You might say that polluting the server logs with ads was unethical, but wouldn’t it be MORE unethical to block these ads, thus robbing the content creators of their revenue?

Conclusion:

While this is, in many ways, essentially the same idea as having ads in terminal commands (as described earlier), having ads in the logs means that they will be picked up by any monitoring utility and have a chance of being seen even if a server is not used interactively. Plus, these ads will work on servers without graphical interfaces.

Although an “on call” employee might be annoyed to get woken up at 4:00 AM by an error message from an ad, surely they wouldn’t object as much if the ad were something beneficial, like “FATAL SYSTEM ERROR: SHRIMP PLATTERS ARE 25% OFF THIS WEEK ONLY WITH CODE [SERVERSHRIMP].”

Ethics of Blocking These Ads:

One might say, “hey, I could just run ANOTHER script to purge the logs of these ads.” But really, wouldn’t that be just as unethical as blocking ads on a web site (see Figure 2), or skipping ads on a recorded program? Yes, yes it would.

 

Fig. 2: Left: this is what someone sees WITHOUT an ad blocker. Right: WITH an ad blocker. Don’t steal bread from developers by blocking annoying ads—it’s your duty as a consumer to endure these ads without complaining.

PROS: Helps encourage development and refinement of formerly-free-and-unencumbered software.

CONS: The ads may consume a few additional kilobytes per day in log files.