Podcast transcript, Mosen at Large episode 202, A comprehensive demonstration and review of the Envision Smart Glasses
Transcripts of Mosen at Large are made possible by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at http://PneumaSolutions.com.
[music]
Jonathan Mosen: I'm Jonathan Mosen, and this is Mosen At Large, the show that's got the blind community talking. The entire episode this week is devoted to a demonstration of the Envision smart glasses: how do they work, what do they do, how easy are they to use, and how might they fit into your life?
[music]
Introduction to the Envision Smart Glasses
Welcome to this comprehensive demonstration and review of the Envision smart glasses. Hopefully, your podcast player of choice supports chapter marks. If it does, you’ll be pleased to know that this review makes extensive use of them, allowing you to skip between major sections of the review. Each time we’ve discussed these glasses on Mosen At Large, there’s always been considerable interest. That interest has only increased markedly since Aira and Envision made their announcement some months ago that the visual interpreter service was coming to the Envision smart glasses platform.
The Envision smart glasses are a premium product that sit at a premium price point for any user, but that's particularly the case for blind people who are far too often unemployed or in lower-paying jobs. Just what can the Envision smart glasses do for you and how well do they do it? Who are they for? If you're highly proficient with the camera on your smartphone, are they worth it? If you struggle to get good pictures from your smartphone camera, could these glasses be the answer? Envision and their New Zealand distributor, Pacific Vision, have lent me a pair of smart glasses so I can put this demonstration and review together for you, and I thank them both for that.
Envision has been extremely helpful answering my questions, but they have not heard this review in advance of its publication. The experiences and opinions you’ll hear are mine. If this product interests you, I’d encourage you to make further inquiries so you can make up your own mind about how they would fit into your life. We’ve had Karthik Kannan from Envision on a couple of episodes of the podcast, but in case you haven’t heard those episodes, I’ll begin by introducing the Envision smart glasses. Envision is a company based in the Netherlands, which for some years has developed an app for iOS and Android called Envision AI.
In 2020, they released the Envision smart glasses, a software suite running on the Google Glass Enterprise Edition 2 hardware. Wearing the glasses, you can read text, recognize objects, describe what's around you, seek sighted help, and more. Since the platform's initial release, Envision has demonstrated a commendable commitment to continuous improvement of the platform. When you receive your package, you'll find the Envision smart glasses body and the Envision glasses frame, which will need to be attached to the body. Once you've done that, you have a pair of glasses you can wear. Both on Envision's website and on the glasses themselves, the documentation is well-written and comprehensive.
If you’re a self-starter, the documentation should be sufficient to show you how to assemble the glasses and get up and running. Envision also offers a free onboarding for every user where you can jump on a Google Meet or a Zoom call and receive assistance to get started. This is a very nice touch and it makes you feel like you’re a VIP customer. The body is where you find all of the electronics of the glasses. It contains the camera, which sits above your right eye when you’re wearing the glasses correctly, the processor, the built-in battery, and the speakers.
The speakers are tinny sounding, and you may have difficulty hearing them in noisy environments. As a wearer of behind-the-ear hearing aids, I was concerned about how well I'd be able to hear the speaker, but I found that the speaker rested comfortably by the microphone of my right hearing aid. I had no problem at all hearing the speech output from the glasses, particularly when I cranked up the volume. If you wear hearing aids that support Bluetooth, or you have a streamer device for hearing aids that's Bluetooth-capable, you can pair the glasses via Bluetooth audio.
This means that you can also use any Bluetooth headphones or earbuds, or even a Bluetooth speaker if you really want the world to hear your glasses. That's actually a really good trick for demo purposes. Wired audio is also supported via the USB-C port. You can use USB-C headphones or buy an adapter that goes from USB-C to a female 3.5-millimeter audio jack. However, at the time of recording this demo, there was a bug with USB-C audio on the glasses which results in the volume being stuck at 20% with no way to increase it.
Envision is investigating this, so hopefully, should you wish to use wired audio, this will have been addressed by the time you get your glasses.
There’s a charging cable in the box which also connects to the USB-C port. The glasses are lightweight and easy to wear for long periods. I did notice though that if you use them for long enough, the body of the glasses can get a bit warm, but not alarmingly so. When worn correctly, the camera will sit above your right eye. For a while, I found myself intuitively holding out objects and pieces of paper directly in front of me. Just like with any smartphone, for best results, it’s important to remember precisely where the camera is located and try to get the object as centered as possible.
In many of the Envision apps, you get excellent guidance on this. The glasses’ body contains a touchscreen. This will be on your right temple when worn correctly. The touchscreen is very responsive. If you’re comfortable with a touchscreen on a smartphone, you’ll feel right at home using this one. Once you’ve attached the frame to the body of the glasses, it’s necessary to pair them with your Envision account. If you don’t have one already, you’ll need to download the Envision app, which is now free, either from the iOS App Store or from Google Play, and create an account. When you’ve done this, you’ll visit the glasses tab in the Envision AI smartphone app and complete the simple pairing process.
Again, the documentation telling you how to do this is excellent and it's a very straightforward process. You begin by pressing the power button, which is on the inside of the glasses by your right ear when wearing them. The glasses take around a minute to boot up, and you'll hear a spoken announcement when that process is complete. Once you're working with the glasses though, it's like working with a smartphone. You can tap the power button to put them on standby and tap the power button again to wake them, so that's instantaneous.
One serious weakness of the product as it currently stands is that you can only pair the glasses with one Envision account at a time.
This means that there are limitations around the extent to which a blind couple can make use of the glasses. It’s no problem to use a single account for tasks like reading text or recognizing objects, but the scan text feature allows you to export what you’ve scanned to your Envision library that appears in your Envision app on your smartphone. Perhaps many blind couples will be okay signing into the same Envision account, but then there are Aira minutes to consider. Again, perhaps not too big a deal if a blind couple is on an Aira shared minute plan, but Aira does keep information on an individual user’s profile that is personal to that user. You lose that personalization if you can’t access your own Aira account.
I really feel Envision has missed a trick here, and that the ability to set up multiple profiles on the glasses is essential. The app takes you through the process of connecting to your Wi-Fi network. Hopefully, that Wi-Fi network is secured, and so you will need to enter the password for it, as you make the connection. That is a one-off. You can connect to multiple Wi-Fi networks with the glasses, including a hotspot, and we’ll talk a bit more about that later.
Learning the gestures
I won’t take you through the pairing process because there are several preferences I’ve already customized, but let me show you the gestures tutorial that you land in once pairing is complete. Note that I’m not using the default Envision voice. Several are available and I prefer this one. I’ll show you all of the voices a little bit later.
Automated voice: The gestures tutorial will start in three seconds. Do a two-finger swipe down to exit the gestures tutorial.
[beeping]
Welcome to Envision Glasses. Let’s take a moment to learn about how to use it. If you are already familiar with using the Envision Glasses, you can do a two-finger swipe down to exit this gestures training.
Jonathan: I’ll just let it continue. There’s a pause here.
Automated voice: The primary means of interaction with the Envision Glasses is through the touchpad. This is located on the outside of the main body near your temple. It starts at the hinge where the glasses fold all the way back to your ears. Place your finger anywhere on the touchpad to see where the touchpad is located on the body of the Envision Glasses.
Jonathan: All right, I got to feel around where I believe the touchpad to be. Now, every time I touch, there is a click. That allows me to get a feel for how large the touchpad is. I’m just tapping around and I see it’s actually quite extensive.
Automated voice: Now that you know where the touchpad is located, we will go through the different gestures. First, let’s go through the gestures that require you to use one finger. Now, let’s do a one-finger double tap. A one-finger double tap is done by doing two quick taps with one finger. This gesture is used when you want to activate or enter a screen. One-finger double-tap five times.
Jonathan: All right, here we go. Tap number one [beep], double tap number two [beep], double tap number three [beep], and four [beep]. Finally, five [beep].
Automated voice: You completed five double-tap gestures. A one-finger single tap is done by tapping once with one finger on the touchpad. This gesture is used as a play or pause button for the screen reader. One finger single tap five times.
Jonathan: Sure. I’m up for the challenge.
[beeps]
Automated voice: Nice. Five one-finger single taps are detected. Now let’s do a one-finger tap and hold. A one-finger tap and hold gesture is done by placing one finger on the touchpad and leaving it on the touchpad for a second. This gesture is used to obtain more information about a certain screen or feature. One finger tap and hold five times. [beeps] Awesome, you completed five tap-and-hold gestures. A one-finger swipe forward is done by putting one finger at the back of the touchpad nearby your ear and moving it to the front while holding your finger on the touchpad. This gesture allows you to move to the next screen. One finger swipe forward five times.
[beeps] Great, you completed five swipe-forward gestures. Now let's do a one-finger swipe back. A one-finger swipe back is done by doing the opposite of a swipe forward. Put your finger at the front of the touchpad and, while holding it on the touchpad, move your finger to the back towards your ear. This gesture allows you to move to the previous screen. One finger swipe back five times. [beeps]
Awesome, you completed five back swipe gestures. Now let’s do a one-finger swipe down. A one-finger swipe down is done by starting at the top edge of the touchpad and swiping gently downward toward the ground. This gesture allows you to exit a screen or feature. One finger swipe down five times. [beeps] Great, you completed five swipe-down gestures. Those were all the gestures that require one finger. There are three more gestures that require you to use two fingers on the touchpad. Now let’s do a two-finger single tap. A two-finger single tap gesture is done by tapping once with two fingers at the same time anywhere on the touchpad.
This is used to open the context menu. Two-finger single tap five times. [beeps] You completed five two-finger single tap gestures. A two-finger swipe down is done by starting at the top edge of the touchpad and swiping gently downward toward the ground with two fingers. This gesture allows you to go back to the home screen and can be used anywhere you are. Two-finger swipe down five times. [beeps]
Great, you completed five two-finger swipe down gestures. A two-finger swipe up is done by starting at the bottom edge of the touchpad and swiping gently upward towards the sky with two fingers. This opens the shortcut for the volume and can be used anywhere. Two-finger swipe up five times. [beeps] Great, you completed five two-finger swipe up gestures. That was it. We have gone through all the possible gestures that you can use on the Envision glasses. You will now enter the playground. The gestures training is a way to practice the gestures on the Envision glasses. Gestures training starts in three seconds. To exit the training do a two-finger swipe down. [beep]
Jonathan: That is a very user-friendly experience.
Automated voice: To exit the practice do a two-finger swipe down.
Jonathan: Now I can perform any gesture on the touchscreen.
Automated voice: Single tap. This gesture allows you to play, pause, or repeat the voiceover of a screen.
Jonathan: This is the thing that many of us are familiar with in many devices and pieces of software, where you get into a learning mode and you can perform a gesture and see what it does.
Automated voice: Swipe back. This gesture allows you to move to the previous screen.
Jonathan: Now I get out of this.
Automated voice: Home.
A quick tour of the Main Menu
Jonathan: We are on the home screen. Let’s just take a look at the main menu.
Automated voice: Home.
Jonathan: We’re on the home option now, and if I double-tap this, we get something a bit special.
Automated voice: It is 16:13. Today is Saturday, 1 October. Battery level is at 82%. You are connected to Menand Wifi 5G. Do a two-finger swipe-down gesture to set your device to sleep mode.
Jonathan: If I perform a two-finger tap-
Automated voice: Hey, your Envision glasses are currently on version 1.8.0.
Jonathan: -that tells me the version of the software that I have. At any stage, if I want more information about the item that has focus, I can perform a one-finger tap and hold. Let’s see what happens if I do that here.
Automated voice: Category one of eight. This is the main screen on your Envision glasses. Do a one-finger double tap to check the battery level, time, date, and what wi-fi you are connected to. Do a two-finger single tap to know what version of the software you are on. Finally, do a two-finger swipe down to put your Envision glasses to sleep.
Jonathan: Now I’ll swipe forward.
Automated voice: Read.
Jonathan: The next item is read that has a sub-menu and we’ll take a look at all of this later.
Automated voice: Call.
Jonathan: The call options are where you'll find the Ally feature and also Aira now.
Automated voice: Identify, find, device settings, feature preferences, help.
The Read Menu
Jonathan: That’s the main menu. There are sub-menus once you go into a lot of these items. We’ll explore those as we move through this demonstration. The read menu is obviously a critical item in the Envision smart glasses, and I’m going to look at that extensively and we will put it through its paces in various ways and just see how well it does in certain scenarios. I’ll swipe forward to read.
Automated voice: Read.
Jonathan: Let’s have a listen to the description of this menu from the glasses itself.
Automated voice: Read. Category two of eight. This category contains the features scan text, instant text, and batch scan that help you read all kinds of different text.
Jonathan: I got that, of course, by performing a tap and hold. I’ll double tap to go into the read menu.
Automated voice: Instant text.
Jonathan: Let’s listen to the description of the instant text. This is something you will be familiar with if you’ve used Envision AI on your smartphone.
Automated voice: Instant text. Read short pieces of text around you instantly. Instant text is ideal for reading short pieces of text as found in room numbers, signs, book covers, et cetera. This feature uses a video feed to detect text in front of the camera. One finger single tap to play or pause instant text. Two-finger single tap to open the context menu. In the context menu, you can toggle the settings for offline mode to use it without an internet connection and set your recognition language. Do a one-finger double tap to start using instant text.
Jonathan: Because I am only using English, I’ve set the offline mode to ‘on’ thinking that if it doesn’t have to go to the cloud it might be a little faster. I’ll swipe forward to look at other options here.
Automated voice: Scan text.
Jonathan: Scan text is next. Let’s hear the description by performing a tap and hold.
Automated voice: Scan text. Read long, dense, and complex pieces of text. Use scan text to read long letters or documents. This feature works by first taking a photo of the text. Two-finger single tap to open the context menu. In the context menu, you can toggle the settings for word detection, which detects how many words are appearing in front of you, and the language detection that will automatically read the text in the correct language and voice. Do a one-finger double tap to start using scan text.
Jonathan: The scan text option is a key function of the glasses that we'll be spending plenty of time with, and I have to say I think it is exceptional. One thing I won't be able to demonstrate, because I just don't have any around, is that the scan text feature also recognizes handwritten text. I'll swipe forward.
Automated voice: Batch scan.
Jonathan: We’ll hear the description of batch scan.
Automated voice: Batch scan. Read multiple pages or documents of text in one sitting. After scanning a page, one finger double tap again to scan the subsequent page. Two-finger single tap to finish scanning and have it open in the reader. Do a one-finger double tap to start using batch scan.
Reading printed mail
Jonathan: Those are the items on this read menu. To put the smart glasses through their paces, I have left the studio and I’m in the dining room at Mosen Towers. I’m recording on a lavalier microphone so I’m hands-free because one of the big benefits of these glasses is that you can do things hands-free and I wanted to give you that experience. I’ve got a lavalier microphone clipped to me. We are recording on the Zoom F3 and we’ve got audio from the glasses going directly into the Zoom F3. You can hear Bonnie in the background coughing and spluttering and she’s got a whole bunch of shopping and we are going to look at some of that shopping as we do the demo of these glasses.
What I'd like to start to do is have a look at some mail that we have on the kitchen table here and try and differentiate between when you would use instant text and when you would use the full scan feature. To set the scene, instant text's real benefit is that while it's not necessarily the most accurate option, it is the quickest option. It's instant. You can use this in a range of situations.
You can hold up a piece of paper, a book, something like that, to the camera and get an idea of what you're dealing with, and then go in and scan if you want to get further information or keep it in your app for later reference, but you can use it for so much more as well. As I've done my research on these glasses, I've heard of people who've sat in a car and been driven around and, essentially, looked out the window and got information of a textual nature that's going past: signage, that kind of thing.
You can also read screens. Of course, we've all been in that terrible situation where our computer isn't talking and we are trying to work out what's going on. Instant text is useful for all of those things. I'm going to go into it now, and in the process, I'll demonstrate another feature of the Envision Glasses. We've had a look at the touchscreen interface. There is also a voice command interface that you can use, and you work with that by pressing the little button around the hinge of the smart glasses where the body of the glasses meets the frame. I'm going to switch the glasses on because they hibernate after a period to save battery.
Automated voice: Hold.
Jonathan: Now, I’m going to push the little button on the hinge.
[beep]
Instant text.
Automated voice: Opening instant text.
Jonathan: Simple as that. I’m going to hold an envelope up to the camera.
Automated voice: Selections on, absolutely positively. Wellington City Council, VHD coupon free [unintelligible 00:22:27] GS4. Voting documents. One City Council, The Mayor. A mayor is the leader A [unintelligible 00:22:36] district now.
Jonathan: We’ve got information there that it is the Wellington City Council and the slogan “Absolutely Positively Wellington” and that these are voting documents. These are voting documents for our local election process which is completely inaccessible because you need sighted assistance to complete the printed voting forms and send them back and that’s the only way you can vote right now. I am going to open the envelope and I’m going to take out what looks to me like the voting booklet and I’m going to see if we can read some of this voting booklet.
I’m just going to open it up at some random page. It’s quite a small booklet and so I think I should be able to get two pages to scan at once. To do that, we have to make sure that layout detection is enabled and we’ll go and have a look at this. Let’s go home.
Automated voice: Home.
Jonathan: Now I’ll swipe forward.
Automated voice: Read.
Jonathan: Double-tap read.
Automated voice: Instant text.
Jonathan: We’ll flick forward.
Automated voice: Scan text.
Jonathan: Now we’re on scan text. Before we go any further, we’ll invoke the context menu by performing a two-finger single tap.
Automated voice: Smart guidance is enabled. To disable smart guidance, do a one-finger double tap. Confirm by doing a one-finger swipe down.
Jonathan: I’ll swipe forward.
Automated voice: Layout detection is enabled. To disable layout detection, do a one-finger double tap. Confirm by doing a one-finger swipe down.
Jonathan: There may be circumstances where you want layout detection to be disabled. For example, if you're dealing with financial data where you've got a bill, an itemized account of some kind, you don't want it to try and de-columnize that, but in this case, we've got a two-page document. We definitely want that on.
Automated voice: Language detection is disabled.
Jonathan: That’s fine because we’re only doing one language.
Automated voice: One-finger double tap to enable. One-finger swipe down, word detection is disabled. One-finger double tap to enable. One-finger swipe down to confirm your selection.
Jonathan: The best way I can think of to describe word detection is that it's a little bit like the user experience you get if you've ever used the Voice Dream Scanner app, in that the more words the glasses detect in the view, the higher the pitch. With Voice Dream Scanner it's louder. With this it's higher. We might demonstrate that later, but I'll go back.
Automated voice: Scan text.
Jonathan: Now we’re on scan text. I’m going to double tap scan text and hold the document out so that it’s in the center of the camera view.
Automated voice: No document detected.
Jonathan: Let’s just–
Automated voice: Move document up or your head down.
Jonathan: Okay, I’ll move the document up a bit, and it took the picture. It’s as simple as that. Now, it’s going to do the recognition and because we are using the scan text feature, it will invoke its reader. It starts to speak–
Automated voice: Reader, I have been recognized with several life memberships as a Paul Harris fellow and awarded an NMZM for my community work. I have a passion to support those less well-off. For more information please see my webpage www.chrisav.nz. Services webpage, better for wrong and environ, [unintelligible 00:26:02]. I have been able to bring good governance along with a focus on results effectiveness and–
Jonathan: I’m going to perform a one-finger tap there and pause it, but as you can hear, that’s doing very well. What if I wanted to scan this whole thing? Well, this is where we would use the batch scanning feature. I’m going to go to the beginning of the book and I’m now going to go back to the previous menu.
Automated voice: One finger swipe down again to exit. Scan text. Batch scan.
Jonathan: Here’s batch scan. I’ll double tap batch scan.
Automated voice: Move document up or your head down.
Jonathan: All right, so move it up a little bit and it’s taken the picture. Now, we’ll turn the page while it’s doing that recognition.
Automated voice: First page scanned. One finger double-tap again to scan the second page. Two-finger single tap to finish scanning and open the text in the reader.
Jonathan: We’ll double-tap.
Automated voice: Move document up or your head down.
Jonathan: It’s taken that picture so I’ll flip the page.
Automated voice: Page 2 scanned. One-finger double-tap again to scan another page.
Jonathan: We’ll do one more.
Automated voice: Two-finger single tap to finish scanning.
Jonathan: There we go. I’m getting pretty good at it now and it took the page straight away without needing to provide me with any guidance. Let’s listen carefully to the prompt once this recognition is done.
Automated voice: Page 3 scanned. One-finger double-tap again to scan another page. Two-finger single tap to finish scanning and open the text in the reader.
Jonathan: Now what we have to do is a two-finger single tap.
Automated voice: Reader, page 1. [foreign language]
Jonathan: I’m just pausing that because although it sounds like it’s not recognizing anything, it’s actually recognizing some text in Māori, the indigenous language of New Zealand and that’s why it’s sounding like this. Let’s see if we can move past it.
Automated voice: [foreign language]
Jonathan: I’m flipping forward.
Automated voice: [foreign language] natural congestion and deliver a public-
Jonathan: There we go and we’ll continue reading.
Automated voice: -transfer we deserve 20 congestion and deliver of public transport. We deserve a 21st-century transport solution.
Jonathan: If I wanted to read this at a later time, I can perform a two-finger tap from within the reader.
Automated voice: Export text. Double-tap to export the text to your Envision app.
Jonathan: If I do this then the text will end up in my Envision library on my smartphone and I can choose import and it will go ahead and import the material for me to read on my phone. That’s the voting material. I can actually read that online, but as you can hear, it is possible for me to read it another way. Now, I have some other material here as well, so I’ll just pick something at random from the mail and see what we can find. This one is a little card and I’m not sure what that card contains, so why don’t we just do a quick scan of it? I’ll go back.
Automated voice: One-finger swipe down again to exit. Batch scan.
Jonathan: I’ll go back-
Automated voice: Scan text.
Jonathan: -to scan text, double-tap.
Automated voice: Move document up or your head down.
Jonathan: You really get into a rhythm with this. It’s very easy to scan the document. The guidance is great.
Automated voice: Reader. Johnsonville Dental Center. Please call us today on 049-398-649 to schedule an appointment. We would like to remind you that it is time for your routine dental health examination. As you know, regular care is important to maintain.
Jonathan: I'll stop reading that, but there's absolutely no ambiguity about what that is about. That is a reminder that it's dental appointment time at the Johnsonville Dental Center. It's really crystal clear, very simple, very straightforward to get the text. There's no doubt that when it comes to going through the mail, reading printed material of all kinds, this thing does a sensational job. It's very accurate. It's very straightforward. It's giving you good guidance in terms of positioning. With the glasses, there is no barcode recognition. You can scan QR codes, but not product codes. That's something that the Envision app will do but the glasses do not. Let's go back into instant text and we're going to check all the shopping. Should we find the shopping, Bonnie?
Bonnie: Yes.
Jonathan: This is a can again. I’m not sure how well it will do.
Automated voice: [unintelligible 00:31:17] mushroom Ajax.
Jonathan: Sounds like it said Ajax.
Bonnie: There is Ajax but that’s not that can.
Automated voice: Cream of mushroom.
Jonathan: That was cream of mushroom. Is it soup?
Bonnie: Yes.
Jonathan: That’s disgusting.
Bonnie: Here’s something else.
Jonathan: Is this more soup?
Bonnie: Yes.
Jonathan: Ugh.
Automated voice: B-E-R-Y-S C-O-U-N veg. Various P-D-Y-A. B V-E-G-E-A-K. Count, veget soup. E-R-G-Y. Heading, various. C-O-U-N-T-E-R V-E-G-E-T-A-B soup A.
Jonathan: Is it vegetable soup?
Bonnie: Yes, lentil vegetable.
Jonathan: Okay.
Bonnie: I’ll put those away.
Jonathan: That’s quite impressive. Let’s see what else we can find. This is identifying the shopping here. We’ve got two cans of soup, and then we know what we ordered and that helps. Mission accomplished.
Bonnie: Yes. This might be more tricky, so it’ll be interesting to see what it says.
Jonathan: Okay, this is– oh, yum. What is this?
Bonnie: [unintelligible 00:32:19].
Automated voice: M-E-N-E-R. Water, R-A-T-I-N. Nedo, E-L-E-F. Energy-aiding [unintelligible 00:32:29].
Jonathan: Energy.
Bonnie: Yes.
Automated voice: M-O-R-N-G. Fresh. L J-E-R-C. Keto, N-G-O-I-N-L.
Jonathan: Oh, and is it keto then?
Automated voice: N-E-R-G aiding nat T. Energy rating. Water. N9YN. Energy aiding water. R-A-T-I-N.
Bonnie: It's Matakana Super Foods.
Jonathan: Oh, is it? Okay. There you go. We established that it was an energy bar and it’s keto, so I guess that helps us identify it. It’s not the same as being able to scan a product code and get all the unambiguous information about it. This is a cardboard package.
Automated voice: Store in a cool dry place. 77. Certified green coffee not from the one. Less than Jacob J-D-E. Con T-W 1-400-250-22110 [unintelligible 00:33:29]. Manufacture Jacobs [unintelligible 00:33:31] 3 Boulevard pair. [unintelligible 00:33:34] 42,160 [unintelligible 00:33:36] France importer diplomat. Distributors 1968 Ltd. German 6ST. Airport City, log 7,010,000. M-O-S-O-N. 10,000 F-S-C two trillion Zelattis. Marty. Level 1834 Passive Highway. 10 capsules [unintelligible 00:33:57].
Jonathan: Okay, so it’s green coffee, is that right?
Bonnie: Decaf.
Jonathan: And it’s 10 capsules. It says store in a dry place. That was good. That gave us a lot of information. For a package like this that’s cardboard, we can probably get a really good picture of it with the scan text, so let’s try that. We’ll go into-
Automated voice: Read. Instant text. Scan text.
Jonathan: Scan text. I suspect we’re going to get quite a lot of information doing this because it’s a simple package. It’s cardboard.
Automated voice: Reader, heading, knee. Espresso, heading. Aluminium capsules, intensity. Ristretto, D-E-CA-F-F-E-I-N-A-T-O. Heading, D-E-C-A-F-F-E-I-N-A-T-O.
Jonathan: Decaffeinato.
Bonnie: Yes.
Jonathan: Okay, that’s pretty cool.
Bonnie: There you go.
Jonathan: Okay, what is this?
Bonnie: It’s meat.
Automated voice: Cook. Two serves. Sirloin steak with garlic and herb butter. 1000. Best before [crosstalk].
Jonathan: It’s sirloin steak.
Automated voice: 15.10.22.
Bonnie: Yes.
Jonathan: It’s pretty impressive.
Bonnie: Yes.
Jonathan: You've got a couple of tools in the toolbox for reading text. Instant text can work when you're just holding something up. If you want to go deeper and get more accuracy, the scan text feature is very mature, with a lot of options there for text with columns and text that doesn't have columns. It's a pretty good experience. No product code scanning at this point with the glasses, although you can do that in the Envision app. What would be really cool at some point is if there was some sort of product recognition feature where at least common products that are internationally available could be recognized based on a range of matches being true.
Perhaps that is something that might come later. There’s no doubt that having this hands-free experience and being able to do this is very useful. Particularly when you’re unloading shopping, you don’t necessarily want to have an iPhone or an Android device in one hand. You’ve got this hands-free experience because you’re wearing the glasses. It’s pretty cool. As we heard, we were able to deal with the mail very effectively as well.
Out for a drive and off to the mall
Because the instant text feature of the glasses is such a significant feature, we are now in the Richard Mosen mobile and we’re going to be doing a couple of things.
When I was talking to Vicki Cardona who is a happy Envision glasses user, she made the point that one of the things she really enjoys is being able to look out the window when driving. She made the point– well, she’s not doing the driving because there are enough poor drivers on the road without Vicki who’s totally blind getting in the mix. She made the point that if she holds her iPhone camera out towards the window in a vehicle, she doesn’t get the same feedback at all as she does with the Envision glasses, so this was intriguing. We’ll do that with instant text in a minute. Richard?
Richard: Yes.
Jonathan: Thanks for driving us.
Richard: No problem.
Jonathan: What do you think of these glasses in terms of visual appearance? Like if you’re walking around a shopping mall or whatever, are you going to look ostentatiously geeky?
Richard: I think a little bit because, of course, they’re based on the Google Glass which– how long ago did that come out, like maybe 10 years ago?
Jonathan: These are actually Google Glass Enterprise Edition.
Richard: Yes. I think that when these initially came out, before their utility to blind people was discovered, they were mainly used by very ostentatiously geeky people.
Jonathan: Yes. You can get regular frames that look like glasses, Smith Optics frames. Do you think that would lessen the ostentatiousness?
Richard: Yes, I think so.
Jonathan: Interesting. I’m going to turn the instant text feature on. Richard, you’re taking us somewhere where there should be signage and stuff that we can see out the window, is that right?
Richard: Yes.
Automated voice: Lincolnshire pumping station, greater Wellington. Warning, property under survailana. Dumping of R-B-B-S-H strictly forbidden. Offenders be ready for prosecution.
Jonathan: That’s amazing. It was talking about the warning about that property.
Automated voice: [unintelligible 00:38:39]. No entry.
Jonathan: No entry?
Automated voice: Porirua, Palmerston North.
Jonathan: Okay. Is it Palmerston North– is that a motorway?
Richard: Yes, we’ve just gone on to the state highway.
Jonathan: Wow. That’s really quite something.
Automated voice: Nando’s man.
Jonathan: Oh, there’s Nando’s?
Automated voice: [unintelligible 00:39:06]. S-C-A-G-M-A. Ray White. Ray White.
Jonathan: Then Ray White Real Estate?
Automated voice: [unintelligible 00:39:13]. C-H-R-I-S-I-Y Liquor. EH 2339054. [unintelligible 00:39:20] Beers RTD Wine Spirits. M-R-T-I-Y [unintelligible 00:39:25]. T6EA [unintelligible 00:39:27]. ISU4. Mainline. City center. [unintelligible 00:39:31]. B-O-U-I-P-P-E-S. Cobham Court. E-A-U-L 32 [unintelligible 00:39:37].
Jonathan: Okay, so the Porirua City center sign, and it said Cobham Court, right?
Richard: Yes.
Automated voice: [unintelligible 00:39:46].
Jonathan: Wow.
Automated voice: NDC Kiwi ND Kiwi. Open seven days 10:00 AM to 6:00 PM. Indian [unintelligible 00:39:56] ww.wcd.co.nz. DDMISO Riua Baker, KPARB2SSNIAUTV, WAXNL Laser, Jalen [unintelligible 00:40:13] Aylon 640 Ruia bakery Filtec Fite LI [unintelligible 00:40:22] baker [unintelligible 00:40:22] Mroaange 543 Liberian dollars. No entry here. Bamily [unintelligible 00:40:32] specialists limited 120 Huni [unintelligible 00:40:35] at 333 Pacific peoples. Take [unintelligible 00:40:40] house.
Jonathan: Oh, there’s [unintelligible 00:40:43]
Automated voice: [unintelligible 00:40:48] Hadley street. [unintelligible 00:40:49] pay by plate [unintelligible 00:40:52] parking cone2 [unintelligible 00:40:55] 5:00 PM. [unintelligible 00:40:57] Hello Troy Wilkins and dub. JJET76 [unintelligible 00:41:09] Park. Peppermill PEPF NCNA [unintelligible 00:41:18] OHS, J Rubidoux. JHS 62. [unintelligible 00:41:29] Bailey’s for lease been [unintelligible 00:41:34] dental [unintelligible 00:41:35] Jim Juana 021623653.
Jonathan: That’s the Bailey’s real estate.
Richard: Yes.
Jonathan: Not too far away from the Ray White real estate. It's not reading everything perfectly, but there's a lot of data coming in.
Automated voice: Harvey, Norman, [unintelligible 00:41:56] 652. [unintelligible 00:42:01] MEMIST, EMIST [unintelligible 00:42:05] STMIARENOUSF, [unintelligible 00:42:10] per policy New Zealand’s lowest food prices. Megan center LiSOF [unintelligible 00:42:19] Leeming, Grubbs Paranny. Briscoes home [unintelligible 00:42:26] LRT 940, PKZ 936. Open YBLCU [unintelligible 00:42:34] PRISOLSA Homen Wari owned and operated by New Zealanders Lewis Van [unintelligible 00:42:49] Tisan French bakery.
Jonathan: Oh, the French bakery.
Automated voice: [unintelligible 00:42:54] abandoned W Will store business CEN [unintelligible 00:42:58] Asia, [unintelligible 00:43:01] Mallannastry, Toyota LVE8, LUR 929.
Jonathan: At the moment, we are using offline mode, so that means that this is not using any data or bandwidth and it's just looking at the signs. What I have found is that every so often it seems to just go into some mode where it's not reading anything anymore, and I've had to go out and come back in again. It's done that now. It's just switched off. I heard it make the click that time. I'll switch it back on and we'll see if it picks up. It hasn't. This is where it seems necessary for me to go back out.
Automated voice: Home, scan text, instant text.
Jonathan: I get back in.
Automated voice: [unintelligible 00:43:53]
Jonathan: It’s picking it up again.
Richard: We've driven along pretty much every street with a whole bunch of businesses in Porirua now.
Automated voice: North City, H Seishin.
Jonathan: Now we’ve reached North City, right?
Richard: Yes.
Automated voice: North City.
Jonathan: That’s the shopping mall.
Automated voice: Ground [unintelligible 00:44:10] ST.
Jonathan: There was a car park sign there.
Richard: Yes.
Automated voice: We detected that it’s too dark.
Jonathan: Oh, and because we’re now in the-
Automated voice: [unintelligible 00:44:19] lighting.
Jonathan: -underground car park, it’s saying that it’s too dark to see anything.
Richard: That surprises me because earlier, some of the stores you were looking at looked like they were in quite harsh shadow because it's quite a bright day. I would have thought that the camera would have had a lot of trouble reading them, but it seemed to be doing okay. We've arrived at North City.
Jonathan: Get out and hoon around the mall.
Richard: We’re just arriving in the mall proper.
Automated voice: Rodney Wayne, Rodney. Wayne story.
Jonathan: Oh the Rodney Wayne hairdresser?
Richard: Yes.
Automated voice: [unintelligible 00:44:57]
Jonathan: I could probably do with a haircut.
Automated voice: [unintelligible 00:45:03] service ALL [unintelligible 00:45:06] iPhone 14 [unintelligible 00:45:18] let’s talk.
Jonathan: Oh, did it just say iPhone 14?
Richard: Yes, we just passed the Two Degrees store [unintelligible 00:45:25]
Jonathan: [laughs] That's a cell carrier here, Two Degrees. I'm getting a bit of stuttering audio, and I'm not clear about whether that's just because of all we're trying to do to record it or whether it's overloaded with signage. I guess how I would describe this is that I'm getting snatches of signage, but not anything reliable enough to help me navigate or really know what's here. It actually did quite a good job of reading signage as we were moving through, and that was quite impressive. There was quite a lot of useful intelligence there. Here in the mall, maybe not so much.
I think what would be useful is if you go into a café or something like that, that has a printed menu, and you can line yourself up with it, whether that menu be on a piece of paper that they give you or on a board and you’re fortunate enough to be able to position yourself, then probably the scan text feature would work quite well there. Now that I’m back in the studio, a couple of reflections on that exercise. As Richard said, it was a very bright, sunny day when we recorded that and the glasses did very well reading signs as I was looking out the window. It really did feel like I was looking at what was going by.
Of course, it didn’t get everything and there were a couple of issues pertaining to the instant text just not working anymore and I would have to go out of the feature and go back in again. The other thing to note too, is that you will have heard in some of those examples I left in the recording, that it was spelling out quite a bit letter by letter. I understand why that is because signage, because it needs to be so visible, is large and so the glasses are reading that out character by character. I’m relaxed about that. If you go back into Episode 194, where we last talked with Karthik Cannon from Envision, he did make the point that he didn’t think going to the mall with the glasses would provide a lot of help.
He said that some users do get a bit of experience and can work with it there, and that is quite a difficult environment. Does that mean that the Envision smart glasses are not going to be of any help to you in the mall at all? Absolutely not. Remember that the Envision smart glasses have a series of applications, if you will, on board the glasses. What we were doing there was just ascertaining how the AI features, as it were, the instant text in particular, would work in that kind of environment. You still have the ability to call somebody, be that a professional Aira agent if you have an Aira account, or a friend or family member using the Ally feature.
I'm going to talk about both of these things extensively in just a couple of minutes. If you're in a shopping mall like that and you are looking for a particular store, then making a call to a human is the way to go, and remember, you'll be hands-free. That's super convenient when you're in a place like the mall, and the smart glasses, with human assistance at the other end, are perfectly capable of giving you a lot of guidance. We can make transcripts of Mosen At Large available thanks to the generous sponsorship of Pneuma Solutions. Pneuma Solutions, among other things, are the RIM people. If you haven't used Remote Incident Manager yet, you really want to give it a try.
It is a fully accessible, screen reader-agnostic way to either get or provide remote assistance. These days, not a day goes by that I'm not using RIM, and one of the ways I use it is to provide technical support to family members. I'm the tech support guy in our family. I quite often get questions from family members that they want me to solve. It's not realistic to expect them to install a specific screen reader, even the demo. Before RIM came along, I found myself having to try and talk them through what they needed to do. Now I can tell them to go to getrim.app, that's G-E-T-R-I-M.app.
They install a simple application on a Windows PC, and just by exchanging a code word, I can have a look at what's going on. I can either run Narrator on this system or, if you're using NVDA, you don't even have to do that. It's an amazing tool. Do check it out: RIM from Pneuma Solutions at getrim.app.
[music]
Call an Ally
Jonathan: I’m now back in the studio to take a look at the next item on the main menu of the Envision smart glasses.
Automated voice: Call.
Jonathan: This is the call menu and I’ll double-tap to go into it.
Automated voice: Call an Ally.
Jonathan: The first option is call an Ally. The way this works is that you can invite somebody who doesn't mind assisting you with visual tasks to download the Envision Ally app, which is available in the iOS App Store and in Google Play. Once the app has been downloaded, the Ally starts the app and registers. Once that registration process is complete, then you can add them as an Ally. You do that by using the Envision AI app on your smartphone. Let's explore the user interface for doing that. I'm in the Envision AI app on my iPhone and I'm going to perform a four-finger single tap to get to the bottom of the screen.
Automated voice: Tab bar selected. Settings tab, five of five.
Jonathan: Focus is placed on the settings tab and that’s the bottom of the tab bar. If I flick left once.
Automated voice: Glasses tab, four of five.
Jonathan: I’ve got the glasses tab and that’s the one I want so I’ll double-tap. You can actually get a lot of useful information about your glasses on this tab as well as make some configuration changes. I’m going to go to the top of the screen by performing a four-finger single tap on the top half of my screen.
Automated voice: Envision Glasses heading.
Jonathan: Now I’ll flick right.
Automated voice: Status connected, battery 56%, Wi-Fi, Menand Wifi 5G button, feature preferences button.
Jonathan: This is the one we’re interested in, I’m going to double-tap Feature preferences.
Automated voice: Envision Ally, add or manage all your allies. An ally is a person that can see what you see by using the Envision Ally app button.
Jonathan: The moment that I double-tap, I’m placed exactly where I want to be and that is to manage or add an Ally. I’ll double tap and we’ll explore how this works.
Automated voice: Add or manage your Allies. An Ally is a person you can make a video call to at any time with the Envision Glasses. They can answer your call through the Envision Ally app and they can directly see what the camera of the Envision Glasses is seeing. Heading.
Jonathan: I’ll flick right.
Automated voice: Add an Ally button.
Jonathan: If we double-tap we can add an Ally here. Let’s explore what that interface looks like.
Automated voice: Send an invitation link to your Ally, heading. Send invitation link, button. Add an Ally by entering their email address, heading. Email address, text field.
Jonathan: We can add an Ally in two ways. We can send them a link, or if we type in their email address here and their email has been registered as an Envision Ally, they will get a notification to say that you are trying to add them as an Ally. Is that okay? They accept, hopefully, if they like you, and then you are all set up to call them if you feel you need sighted assistance, and they, as the explanation says, will see what you would see through the smart glasses. I'll go back to the previous screen by performing a two-finger scrub.
Automated voice: Add or manage your Allies.
Jonathan: Flick right.
Automated voice: Add an Ally, button. My Allies, heading. Heidi Taylor.
Jonathan: The megastar of Mosen At Large herself, Heidi Taylor is there as an Ally. She has accepted my invitation and it goes on to give her email address. If I flick up we’ve got the actions rotor here.
Automated voice: Remove.
Jonathan: I can remove Heidi as an Ally if I want to. That’s the process of setting up an Ally that’s done from your phone. Meanwhile, back on the glasses, I’m going to double-tap the call an Ally option.
Automated voice: Heidi Taylor, Add Ally.
Jonathan: We’ve got an add Ally button. If we double-tap that on the glasses, what do we get?
Automated voice: A notification has been sent to your phone. Tap on the notification and start adding an Ally.
Jonathan: It just sends a push notification to take you through the process that we’ve already been through. I will flick down.
Automated voice: Add Ally.
Jonathan: We’ll go back.
Automated voice: Heidi Taylor.
Jonathan: I’m going to double-tap to give Heidi a call and she should get the video feed from the Envision smart glasses.
Automated voice: Connecting with Heidi Taylor.
Heidi: Hello.
Jonathan: Good morning. Welcome to the Envision smart glasses.
Heidi: Thank you.
Jonathan: What can you see?
Heidi: Right now I can see your computer monitor and it looks like Reaper?
Jonathan: [laughs]. That’s right. I’m sitting in front of the monitor in front of the mic in the studio recording this demo of the smart glasses. What do you think the quality of this is like in terms of the video feed you’re getting? How well-defined it is and also the audio that you’re getting?
Heidi: I’ll start with the audio. It’s a bit better today than when we did our first test. It’s still not amazing but it’s–
Jonathan: You wouldn’t have trouble hearing something I’m saying to you though, or anything like that?
Heidi: No. I wouldn't. In terms of the video, I'd say it's good for looking at things from a distance, but detail is a little bit tricky. I can see the general shapes on your screen of Reaper, but I can't read any of the labels, for example.
Jonathan: In terms of what that means for practical usage. When would you say, okay if you want help with this call me on FaceTime with your phone, if you want help with that call me with your Envision smart glasses?
Heidi: I think the Envision smart glasses I might use for, like, a broader area. Say you're trying to find something in a room and just want the general location of it, or you want me to look at, say, a shirt to see if it has a stain on it. Things like that, where I don't need the fine detail, would be good. Whereas if you wanted me to read the words on the back of a device, probably the iPhone camera would be better because it has much better image processing.
Jonathan: Yes, I'm going to talk about that use case in a little bit. Let's say, for example, that I'm really in a time crunch and I call an Uber to a building that I've never been to before, and I get dropped off and I have no idea where I am. I'm using my cane, I've got no idea where the door is or anything like that. Do you think that in that case, the Envision smart glasses would be a good fit because then I could be hands-free?
Heidi: I think that would be a really good use case for it because we’d just be able to say, “Oh look to your left, look to your right,” and then you’d be able to line up with which way you were looking and walk in the correct direction to find the door. I could tell you which side the handle was on. For example, if the building had like big enough signs inside, I could probably read the signs. It’s the details that are harder to see.
Jonathan: Is there anything where you would think the glasses would do a better job than the iPhone? For example, if I’m in that scenario that we just talked about, do you think we could achieve results that are just as good with my iPhone? Particularly if I have an iPhone 14 with the wide-angle lens and all that stuff, would the glasses be superior or just about the same than using the iPhone?
Heidi: I think from my end it would probably be very similar, but for the person with the phone or the glasses, it could be a very different experience depending on whether you really need that extra hand free, for example. Because if, say, you put the phone in a shirt pocket with the camera facing out, I could maybe see the stuff, but there wouldn't be the fine control for me to see the details. Or if you could hold your phone with one hand and use your cane with the other and not need an extra hand free, I could get the fine control.
Jonathan: If you are holding your phone out in front of you and you've got a cane or a dog harness in the other hand, then it can be quite hard to locate doors and that sort of thing, but with the glasses you're completely hands-free. I suppose you could use one of those harness gadgets that they have, chest harnesses that some people strap their iPhones into, which people got really into when the Aira Horizon glasses went away, but that takes a bit of effort, I suppose.
Heidi: Yes, and also especially if we’re talking about a building, and we’re just trying to identify if it’s the correct building. If it’s like a chest harness it’s probably really hard to point the camera up to check the sign on the building. Whereas with the glasses you could be told to look up and we could see stuff at the higher level and check that’s the right sign and things like that.
Jonathan: The video feed, is it grainy? Is it good enough quality that you have some confidence that if I was asking you to help me look for signage in that way, you'd get a good quality picture?
Heidi: It’s a little bit grainy but not outrageously so, it’s just not iPhone-level quality I think is what I’d have to say. I think it’s very usable unless we are looking for very fine details in which case the iPhone is probably superior.
Jonathan: What was the experience like registering as an Ally?
Heidi: It was really straightforward. I don’t remember having any difficulty whatsoever. It was a couple of weeks ago now wasn’t it, or a week ago? I don’t remember the process, but the fact that I don’t remember it is probably a good thing because it was so simple.
Jonathan: Thank you very much for sharing your experiences.
Heidi: You’re very welcome.
Jonathan: Okay you can hang it up now because I want to hear what the sound is like when you hang up.
Heidi: All right. Goodbye.
Jonathan: Goodbye.
Automated voice: Call ended. Heidi Taylor.
Jonathan: There she goes. That is Heidi on the other side of town. Able to see that I am recording this for you in Reaper. She sees the screen. She couldn’t really read what was on the screen.
Call an Aira agent
That takes us on to the next item in this call menu. I’ll swipe down to go back to the previous menu.
Automated voice: Call an Ally.
Jonathan: Swipe forward.
Automated voice: Call an Aira agent.
Jonathan: Here's call an Aira agent. The moment that Envision announced these smart glasses, there was intense interest in whether Envision and Aira, the Visual Interpreter Service, would form a partnership. For those not familiar with the backstory, Aira has had three generations of smart glasses. In its infancy, it used Google Glass, I think probably a previous generation to the one that I'm using now with the Envision smart glasses. It also had glasses called the Austria glasses. Finally, it had Horizon, which I don't think was particularly popular because it required you to cable the glasses to a dedicated Samsung smartphone device.
It's not unusual for startups to go through various business models until they find one that is sustainable. Eventually, Aira came down from some giddy heights to focus on its core business, which is providing a professional visual interpreting service. They felt it appropriate to get out of the hardware business to streamline the company's operations significantly. They retired the Horizon Glasses and stopped supporting them. This led to an interest among Aira customers in technologies like the chest harnesses that we were just talking about with Heidi. There was nothing like the glasses really.
Being able to wander through an airport you’ve never been to before or a building you’ve never visited before with the assistance of a professional agent giving you advice. For me at least, having an iPhone strapped to your chest just wasn’t the same as having glasses. People have been hoping for a while that Aira and Envision would partner up and bring Aira to the smart glasses. Aira is now in an expansion phase again, after having been in a consolidation, perhaps a recovery phase for some time. That’s exciting to see.
We’ve seen a number of exciting product initiatives from Aira this year, including the new web interface for Aira which I use a lot. If you double tap on call an Aira agent, and you haven’t registered your Aira account yet, the Envision Glasses will prompt you through the process of doing so. That process involves going to the website and scanning a QR code. I must confess that that process made me go huh, I was pretty skeptical about imposing that process on the blind community because I know that a lot of blind people do have trouble with QR codes.
If you’ve got a high degree of comfort with your phone, you can manage QR codes, but many people struggle. I have to say this was a very simple process. I sat here in my studio where I’ve got a monitor in front of me and I made sure that the monitor was on. I went to the Aira website and found the appropriate place. The Envision smart glasses just picked up the QR code without any effort at all. I now have Aira on my Envision smart glasses. I know it’s become a bit of a cliche to talk about the right tool in the toolbox when it comes to blind people and assistive technology but it really is true in this case.
Although people complained a lot about the process of using the Horizon glasses, one thing that Aira spent a lot of time on with that generation of glasses was the quality of the picture, the video image. I think there is still possibly some work to do for some use cases, and I understand that Aira and Envision are continuing to talk about the video feed that Aira is receiving from the smart glasses. I’ll give you an example from my real-world testing while I’ve been demoing these glasses that illustrates this point. I had a Sonos Port, and on the day that the Queen died, my Sonos Port died too, necessitating my buying a brand new Sonos Port.
The Sonos Port is one of the few Sonos devices with a pretty inaccessible setup process. That process involves entering an eight-digit code that is printed on the back of the unit. My understanding is that the writing is not particularly clear for sighted people either. I could not proceed with the setup of my new Sonos Port until I knew what that eight-digit code was. I grabbed the Envision Glasses and called an Aira agent, and I said, “What I’d like you to do please is tell me this eight-digit code and email it to me, so I have the code on file for the future.” It’s a great Aira task, but she was having a lot of difficulty seeing the eight-digit code through the glasses.
She took various pictures. She got me to get closer to the device and to look in slightly different directions, but we were having a lot of difficulty gaining confidence that she had the eight-digit code correct. I said to her, “Would it be easier if I called back using my iPhone?” She said, “Let’s try that.” It was a lot easier. She was able to send me the code without any fuss. On the other hand, I have also traveled with the Envision smart glasses and it’s wonderful. I feel I’ve got a lot of Aira’s utility back that I lost when the Horizon glasses were decommissioned.
For me, this is a very significant thing because, as a blind person with a significant hearing impairment, I don’t do well in unfamiliar environments. They can be really noisy, and my echolocation can’t be relied upon like it used to be. That was one of the things that I really enjoyed about Aira: when the glasses were around, I felt they had given me a lot of liberty, a lot of independence, and a lot of confidence back. With the Envision smart glasses, you definitely get that.
If you do a lot of travel and would like some assistance, or if you are in a store or a mall looking for signage, then this is where the Envision and Aira partnership is hitting it out of the park. People who’ve had these glasses for some time will tell you that Envision has demonstrated a commitment to continuous improvement. I’m confident that this will only get better. If you’ve missed the Horizon smart glasses, you definitely want to consider giving the Envision smart glasses a try with Aira.
As I record this, when I scanned that QR code, I was credited with 200 additional minutes on my Aira account. That has certainly assisted me to play with the glasses and explore what they can do. Of course, you’ve got something with the Envision smart glasses that you never had with the previous generations of Aira glasses, and that is that if you’ve got friends or family members to hand who don’t mind helping you, you can save minutes by asking them to assist you, and they can get the feed from the smart glasses.
You couldn’t do that with Aira before. Now keep in mind that Aira has additional information as well. They can use GPS data. They can do all sorts of other things that a regular Ally can’t. For many functions, though, if you just want somebody to assist you, you’ve got the ability to do that too. It gives you a lot of potential. I think with the combination of these visual interpreter and Ally features, and the way that Envision handles printed materials so easily, you really start to get a pretty compelling value proposition for many people with these glasses.
Describe Scene
Let’s go on to the next item. I’ll swipe down.
Automated voice: Call. Identify.
Jonathan: This is the identify menu, and I’ll double tap. We’ll explore what’s here.
Automated voice: Describe scene.
Jonathan: If you’re familiar with the Envision AI app, you’ll be familiar with this option, describe scene. The way that this works is that the glasses take a picture, a snapshot of a single point in time. They analyze that picture and describe to you what is in it. I thought it would be interesting to take a picture with the glasses here in the studio and have it describe the scene, and then take a picture with my iPhone camera roughly at eye level and see if there’s much difference in terms of what we get back.
We’re also looking at processing time: does it take significantly more time with the glasses than it does with the phone? At the time that I’m putting this part of the demo together, I’m still rocking my iPhone 12 Pro Max. I’ll double-tap the describe scene option on the glasses.
Automated voice: A computer monitor and a speaker.
Jonathan: Let’s analyze what happened there when I double-tapped the option. You heard some beeps, and that was counting down, giving you a chance to look where you want it to look before the picture was taken. You heard the camera shutter sound, and then it went and analyzed the scene. It told me that there’s a monitor in front of me and that it is seeing one of the speakers that are on the wall. In the studio here we have a couple of Sonos Play:5s and a Sonos Sub.
Let’s try and repeat that with the iPhone and immediately what I’m finding is that it’s far more inconvenient because I had to press some buttons in Reaper to change what you’re hearing. I’ve only got one hand because of course I’m holding my phone with the other hand. That is an immediate negative for the iPhone and a plus for the glasses.
Automated voice: Describe scene button.
Jonathan: I’m going to double-tap.
Automated voice: Selected. Describe scene dimmed.
Probably a desk with a computer.
Jonathan: A couple of things to note about that. First of all, the glasses took longer by a couple of seconds. It was about 10 and a half seconds from when the shutter went off to when we got the description from the glasses, and about eight and a half seconds from when the shutter went off to when the iPhone 12 gave us a description of the scene. The glasses, though, gave me a much more accurate description of what I would be looking at, even though I held the iPhone at eye level so its camera was roughly where the glasses camera is.
The scene description in the app picked up the desk below rather than what I would be looking at if I were a sighted person, which is the monitor in front of me and one of the speakers that is in the field of view of the glasses. In terms of how you might use this feature, this is not a real-time thing. As I said, it’s a snapshot in time. You might walk into an unfamiliar room and just think, “Okay, I wonder what’s in this room? What does it look like?” You can take a quick snapshot, and the glasses will tell you what they’re seeing, what the room contains within their field of view. If I tilt my head down a bit, I’m now looking at the keyboard that’s in front of me. If I were sighted, I’m sure I’d be looking at these keys now. I’ll double-tap.
Automated voice: Describe scene. A keyboard with wires in a computer.
Jonathan: There you go. It’s even picking up the wires. It’s picking up the computer that’s on the left, so it’s much more directional. It’s picking up what you are actually looking at and that does have quite a bit of utility. I’m going to swipe down now-
Automated voice: Describe scene.
Jonathan: -and swipe forward to go to the next option.
Detect light
Automated voice: Detect light.
Jonathan: This is your classic light detector option. Let’s listen to the description of it if I tap and hold.
Automated voice: Detect light. Detect the intensity of light in your current environment. Audio cues help you determine how bright it is around you. Open the context menu to switch between frequency of beeps and pitch of tone. A lower pitch or frequency of beeps means no light is being detected. Do a one-finger double tap to start detecting light.
Jonathan: Let’s do that. I’m not going to make any adjustments. I think there’s quite a bit of natural light coming into the studio at the moment. It’s not hugely light out [unintelligible 01:12:40]. I’m going to close the curtains.
Automated voice: The higher the pitch of the tone, the more light that is being detected. A very low pitch means it is very dark.
Jonathan: The curtains are closed, and I’m just going to turn on the lights in the studio, if I can find my phone and say the jolly thing. Turn on the studio lights.
Automated voice: The studio ceiling is on.
Jonathan: I’m looking around.
Automated voice: You have 76 seconds.
Jonathan: Now I can say turn off the studio light.
Automated voice: The studio ceiling is off.
Jonathan: Now if I open the curtains.
Automated voice: [unintelligible 01:13:47] 76 seconds.
Jonathan: Now we’ve got a lot more light; you can hear that’s gone up significantly. That is the light detector on the Envision smart glasses. You also have this in the Envision AI app.
Automated voice: Detect light.
Jonathan: There’s another way as well to get this information presented to you audibly. If I perform a two-finger single tap for the context menu.
Automated voice: Pitch of tone.
Jonathan: We’ve got pitch of tone.
Automated voice: One finger double tap to switch to frequency of beeps. One finger swipe down to confirm.
Jonathan: We’ll double tap.
Automated voice: Frequency of beeps.
Jonathan: Let’s see what that is like.
Automated voice: One finger double tap to switch to tone pitch. One finger swipe down to confirm. Detect light.
Jonathan: What we’re hearing now is the frequency of beeps. If I look out the window. Oh, wow. That’s much more rapid.
Automated voice: The higher the frequency of beeps, the more light that is being detected. A very low frequency of beeps means it is very dark.
Jonathan: I’m going to close the curtains. The frequency’s gone way down. I’ll open the curtains again, look out the window, and let the sunshine in. Face it with a grin. There we go.
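For readers curious about the mechanics, the two audio-cue modes just demonstrated boil down to a simple mapping from brightness to sound. Here is a minimal sketch of how such a mapping could work; the value ranges and function names are illustrative assumptions, not Envision’s implementation.

```python
# A minimal sketch of the two light-detector cues demonstrated above.
# The ranges and names are assumptions for illustration only.

def clamp(brightness: float) -> float:
    """Keep a normalized reading between 0.0 (dark) and 1.0 (bright)."""
    return max(0.0, min(1.0, brightness))

def tone_pitch_hz(brightness: float) -> float:
    """Pitch-of-tone mode: the brighter the light, the higher the tone."""
    low_hz, high_hz = 200.0, 2000.0  # assumed audible range
    return low_hz + (high_hz - low_hz) * clamp(brightness)

def beep_interval_seconds(brightness: float) -> float:
    """Frequency-of-beeps mode: the brighter the light, the faster the beeps."""
    slowest, fastest = 1.5, 0.1  # assumed gap between beeps, in seconds
    return slowest - (slowest - fastest) * clamp(brightness)

# Curtains closed versus looking out the window:
print(tone_pitch_hz(0.1), beep_interval_seconds(0.1))  # low pitch, slow beeps
print(tone_pitch_hz(0.9), beep_interval_seconds(0.9))  # high pitch, rapid beeps
```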
Automated voice: Detect light.
Jonathan: That’s the light detector option. Let’s flick forward.
Automated voice: Detect light. Recognize cash.
Recognize cash
Jonathan: The next one is recognize cash. This is an implementation of the excellent Cash Reader App. We’ve talked about this app on Mosen At Large in the past. I think it’s a superb app and it’s built right into the Envision smart glasses. I’m excited to be able to tell you that Bonnie’s actually given me some money so I can demonstrate this feature. I’m going to double-tap.
Automated voice: Searching for New Zealand dollar.
Jonathan: It knows that I’m in New Zealand, so it’s going to search for New Zealand dollars. I’ve got a note here. I don’t know what it is. I’ll hold it up to the glasses.
Automated voice: NZ$5.
Jonathan: NZ$5.
Automated voice: NZ$5.
Jonathan: There we go.
Automated voice: NZ$5.
Jonathan: I’ll take the note away because it will continue to repeat it. I’ll just pick another bit of currency that Bonnie’s given me here and hold it up to the glasses.
Automated voice: NZ$10.
Jonathan: Right away we have NZ$10. It is nice and responsive. Pick another one here and see what we have.
Automated voice: NZ$10.
Jonathan: That is another-
Automated voice: NZ$10.
Jonathan: -NZ$10 note. I have one more bit of currency here I’ve been given.
Automated voice: NZ$5. NZ$5.
Jonathan: That’s what we have. That’s very efficient. No problems or complaints to report with that at all. It works very well. What happens if you’re traveling, if you want to recognize other currency? I’ll go back to the previous screen by swiping down.
Automated voice: Recognize cash.
Jonathan: Now I’m going to go to the context menu by performing a two-finger tap.
Automated voice: New Zealand dollar selected.
Jonathan: New Zealand dollar is selected. If I go forward.
Automated voice: US dollar, Euro, Japanese Yen, British pound, Australian dollar, Europe, North America, Central America and the Caribbean, South America, Asia, Africa, Middle East, Oceania.
Jonathan: You have a number of common currencies that you may want to recognize at the top, and that’s probably a combination of your geographical location and commonly used currencies like the US dollar, Sterling, and the Euro. Then you have a series of menus where you can drill down by location and find a currency. A very simple, effective, and powerful tool. I will give Bonnie her money back when the demo is over.
Scan QR code
Let’s continue through the Identify menu. The next option is.
Automated voice: Scan QR code.
Jonathan: QR codes have been around since 1994, so they aren’t new technology, and you do find them popping up in interesting places. Recently, when I was traveling in Europe, I found them a lot at museums and other places. They can direct you to websites. Sometimes, if you have an eSIM from your carrier, they can help you to add that eSIM to your phone. They can do all kinds of things. In fact, at the ABBA Museum in Stockholm, I was able to scan a QR code and that started the audio guide on my phone.
It just popped up in Safari. One of the use cases that’s particularly exciting is that with the scan QR code feature, you may be able to replace standalone products or apps that you use for identifying products. Many of us pick a particular technology or an app and label clothes and items in the kitchen, things that we need to recognize easily. It may be that you run an app to do this, or that you have some sort of standalone device. This labeling approach has been around for quite a long time, but QR codes are an open, generic standard.
Anybody can generate them with the right website, and you can give them whatever textual information you like. At the moment, the QR code reader in the smart glasses works with text-based QR codes. That means you could put them, for example, on doors or on elevator floors, and they would work alongside BlindSquare where BlindSquare is also reading QR codes. There is a range of things that you can do. I do hope eventually, though, that the QR code reader could at least handle URLs and take you to a website.
The way I see that working is that you could scan a QR code, perhaps in the environment I was talking about at a museum, or perhaps where something is advertised with a QR code that takes you to a website. Then it could send that to the Envision app on your smartphone so that you could go to that URL on your phone. No doubt there is plenty of utility with the QR code reader as it stands.
How do you make these? There are plenty of websites to do it. The one that I prefer, because it seems to me to be the most accessible that I’ve found in terms of generating these codes is a website called qrd.by. Yes, it’s an interesting URL but that’s what it is, qrd.by. I’m on the qrd.by page now I’ll verify that and using JAWS, I’ll press the JAWS key with T.
Automated voice: QR code generator create QR code for free Microsoft Edge.
Jonathan: By default, the website will generate URLs. I think that’s probably the most common QR code that there is. We want it to generate text QR codes that are compatible with the glasses. I’ll invoke my links list by pressing the JAWS key with F7.
Automated voice: Links list dialogue.
Jonathan: Press the letter T.
Automated voice: Trackable text 16 of 134.
Jonathan: The second link that starts with the letter T, and the 16th link on the page, is text, so I’ll press it to activate it. I’ve done that, and we get no feedback. Now I’ll go to the edit field on the page by pressing E.
Automated voice: Type any text..edit.
Jonathan: It confirms that we’re generating a text QR code by saying type any text. I’ll press enter to turn forms mode on.
Automated voice: List with one items, type any text …edit, one of one.
Jonathan: I’m going to type lemon ginger tea. I’ll just confirm that with a say line.
Automated voice: Lemon ginger tea.
Jonathan: That’s all there is to it. Now I want to be able to download this QR code. I’ll turn forms mode off.
Automated voice: Virtual PC, list link, download QR code.
Jonathan: Right under the edit box there is a download QR code link. I’ll press enter to activate it.
Automated voice: Download QR code link.
Jonathan: Now if I down arrow.
Automated voice: List of seven items, heading level three, vector. Link graphic, download QR code eps. Link graphic, download QR code pdf.
Jonathan: There’s the one that I like to use. Download QR code pdf. I’ll press Enter.
Automated voice: List with seven items. Download QR code pdf link graphic, QR code generator, create QR code for free. Downloading static QR code.pdf. 4.9 kilobytes. Done.
Jonathan: It is a very small file, just a 4.9-kilobyte PDF containing my QR code. If you were listening closely, you’d have heard that it’s given quite a generic name. The next thing I want to do is go into File Explorer. I’ll do that by pressing the Windows key with E.
Automated voice: Home, items view list button.
Jonathan: Now I’ll go to my Downloads folder and press Enter.
Automated voice: Downloads. C user last today expanded, static QR code.pdf.
Jonathan: That’s called static QR code.pdf. Since I’ve been using the Envision Glasses, I’ve made a few of these QR codes, and I’ve developed a convention: all these files, which I keep, start with QR, a space, and then what they are. I’ll press F2 to rename the file.
Automated voice: Edit. Static QR code.pdf.
Jonathan: I’m going to type QR space lemon ginger tea and press enter.
Automated voice: Items view multi-select list box. Today expanded QR lemon ginger tea.pdf.
Jonathan: I’ll press enter to open that file.
Automated voice: QR lemon ginger tea.pdf, Adobe Acrobat Reader DC 64 bit. QR code generator. Create QR code for free document. Alert, empty document dialogue. This document may be a scanned image which makes it difficult or impossible for JAWS or fusion to read without first scanning the document with OCR. Would you like JAWS to OCR the document now? Yes, button Alt+Y.
Jonathan: This is a message from JAWS that tells me there’s an image on the screen. There’s no text that it can detect and JAWS is offering to scan that text for me using optical character recognition. I don’t need that in this case because it’s a QR code, so I’ll tab to No.
Automated voice: No button Alt+N.
Jonathan: Press Enter.
Automated voice: Document. No links. Alert, empty document. Document.
Jonathan: Now if all is going well, the lemon ginger tea QR code should be displayed on my computer monitor in front of me. Let’s go back to the glasses.
Automated voice: Scan QR code.
Jonathan: I’ll double-tap.
Automated voice: Looking for a QR code. Reader: lemon ginger tea.
Jonathan: There it is, right away. Because I’m looking at the screen and the QR code is visible on it, it just popped right up and said lemon ginger tea. Now all I have to do is print that out and affix it to the appropriate box. You may need some sighted assistance with that, or you may not, because with boxes like tea, it can be pretty easy to identify those sorts of things with the Envision smart glasses anyway. It will certainly be quicker if you’ve got QR codes on the products.
You may want to cut the QR codes out, because you don’t want a single massive A4 page that just says lemon ginger tea with a tiny QR code on it. It may be something you’d want some help setting up from a friend, family member, or someone who assists you, so you can cut that QR code out and affix it to the box. Once it’s done, it’s done, and when you get new lemon ginger tea, you can keep these PDF files and just print out another label as you require.
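If you’d rather generate these labels yourself than rely on a website, a few lines of code will do it. Here is a minimal sketch using the open-source qrcode library for Python; the label text and file names are examples only, following the QR-plus-description naming convention described above.

```python
# A minimal sketch of generating text-based QR labels like those above,
# using the open-source "qrcode" library (pip install "qrcode[pil]").
# Label text and file names are examples only.
import qrcode

labels = ["lemon ginger tea", "peppermint tea", "decaf coffee"]

for text in labels:
    image = qrcode.make(text)      # encode the plain-text payload
    filename = f"QR {text}.png"    # the naming convention from the demo
    image.save(filename)           # print, cut out, and affix to the box
    print("Saved", filename)
```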
Detect colors
The next item on the identify menu is this one.
Automated voice: Detect colors.
Jonathan: We can always find out what an option does by performing a one-finger tap and hold, although this one is pretty self-explanatory.
Automated voice: Detect colors, look at the things around you and identify their colors.
Jonathan: I’ll double-tap.
Automated voice: Black.
Jonathan: I’ll look around.
Automated voice: Gray.
Jonathan: I presume.
Automated voice: White.
Jonathan: Okay, just looking at the curtains now.
Automated voice: Gray.
Jonathan: Turn around.
Automated voice: Dark gray, light blue.
Jonathan: Look at the carpet.
Automated voice: Gray. We detected that it’s too dark. To get the best results make sure that there is enough lighting.
Jonathan: All right, well I do have natural light coming in, but I was looking down. I believe I am wearing a blue shirt at the moment, so I hold my arm up to the glasses.
Automated voice: Dark blue.
Jonathan: There we go, and it tells me it’s dark blue.
Automated voice: Teal.
Jonathan: If you are identifying clothing and you want to find out the color of something, you hold it up to the glasses, and when you’re in this mode, it will instantly tell you the color of the item in front of the glasses camera. Those are the items on the identify menu.
Find objects
We’re going to return to the main menu and look at the next major menu category.
Automated voice: Find.
Jonathan: That is the Find menu, so I’ll double-tap to explore what’s in the Find menu.
Automated voice: Find object.
Jonathan: The first is Find objects. Let’s have a listen to the explanation of this feature.
Automated voice: Find object, find and locate specific objects in your surroundings. Choose objects from a preloaded list to have an audio cue play when the object is in front of the camera. Use this to look for your misplaced bottles, laptop, coffee cup, and more. A sound will be played as soon as the object is detected. For the best results, move your head around slowly. One finger swipe forward and back to switch between objects to search for.
Jonathan: I’m going to double tap.
Automated voice: Bench.
Jonathan: Bench is the first one. I’ll just swipe through these.
Automated voice: Bicycle, bottle.
Jonathan: As you can hear, they are in alphabetical order.
Automated voice: Car, cat, chair, dog, keyboard.
Jonathan: All right, there we go. [laughs] I’m sitting right in front of a keyboard, so if I look away and then look back, there’s the keyboard. It’s definitely detecting that. No problem at all.
Automated voice: Laptop.
Jonathan: Okay, I’ve got the Mantis here. Well, maybe that doesn’t count because it doesn’t have a screen. I don’t have a laptop in the studio at the moment.
Automated voice: Motorbike, sofa.
Jonathan: I should just say I definitely do not have a motorbike in the studio, unfortunately. I mean I normally do, but not today.
Automated voice: Table.
Jonathan: We’ve got a table. I mean there’s a big desk here, but maybe it doesn’t qualify, so it’s not pinging when I look at the table.
Automated voice: Toilet, traffic light.
Jonathan: That is obviously very interesting. I suppose the thing that I think about, though, is that when you are walking around, you probably want instant text on. One of the impressions that I’m forming is that there really does need to be a combination of these things. I mean, I’d like it to be able to recognize traffic lights, but not at the expense of losing access to signage that’s around me. I think that some of these modes need to be combined, or at least you should be given the option to combine them, as the sketch after this list suggests.
Automated voice: Train.
Jonathan: That’s the list. There are some curious omissions here. It doesn’t, for example, have some of the items that you might misplace regularly, like a wallet, keys, or your cell phone, but you can get some of that information by going into the explore mode. It can therefore be a bit of a learning curve to work out which mode to use. The way I describe it is that Find object narrows the view to a specific thing, whereas when we get to Explore, you’ll find that it just tells you what it sees, and you don’t have to specify what it is that you are looking for.
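To picture the combined mode being asked for above, here is a minimal sketch of several recognizers sharing one video feed, with the active set chosen through feature preferences. Every name in it is a hypothetical illustration, not Envision’s actual architecture.

```python
# A minimal sketch of a combined Explore-style mode: several recognizers
# share one video feed, and the user chooses which ones are active.
# All names are hypothetical illustrations, not Envision's architecture.
from typing import Callable, Dict, List

Recognizer = Callable[[bytes], List[str]]  # frame in, announcements out

def explore_combined(next_frame: Callable[[], bytes],
                     recognizers: Dict[str, Recognizer],
                     enabled: Dict[str, bool],
                     speak: Callable[[str], None],
                     frames: int = 100) -> None:
    for _ in range(frames):
        frame = next_frame()
        for name, recognize in recognizers.items():
            if enabled.get(name, False):  # e.g. objects, people, instant text
                for announcement in recognize(frame):
                    speak(announcement)
```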
Find people
The next option on the Find menu is-
Automated voice: Find people.
Jonathan: Let’s have a listen to the explanation of this Find people feature.
Automated voice: Find people. Detect if there are people around you or find a specific person. This feature works by using a video feed to scan for people. A beep plays when a person is detected and the name of the person will be spoken out if their face has been taught in the Envision app. Use this to locate people around you. Find friends, family or colleagues in a public space. For the best results, move your head around slowly.
Jonathan: They’ve done an exceptional job of documenting the whole ecosystem. As it mentioned there, if you’re going to make this work, you have to teach the Envision app about the faces you want to recognize. That is also a way of gaining the consent of those who are being subjected to facial recognition, because you’ve got to take a series of photos of them, and that will require you to explain why you are doing it. In a way, that’s a good thing for privacy reasons. Let’s just explore the user interface that governs this, and to do that, we’re going to go back to the iPhone and take a look at feature preferences in the Envision smartphone app. Remember, this is available for iPhone and Android. The place we need to be in that app is here.
Automated voice: Glasses tab four of five.
Jonathan: I’ll double-tap. Now if I go to the top of the screen, we have all information about the glasses present.
Automated voice: Envision Glasses heading. Status, connected. Battery 89%. Wi-Fi, Man and Wi-Fi 5G. Feature preferences button.
Jonathan: Feature preferences is where we need to be. I’ll double-tap.
Automated voice: Envision glasses, back button.
Jonathan: Flick right.
Automated voice: Feature preferences heading, Envision Ally. Add or manage your allies. An ally is a person that can see what you see by using the Envision Ally app button.
Jonathan: Previously in the demonstration we explored this.
Automated voice: Favorites. Add features to favorites and quickly access them on the Envision Glasses by swiping back on the home screen. Button.
Jonathan: We’ll talk about that a little later.
Automated voice: Teach a face. Teach faces of a friend or a family member and have them recognize with the Envision glasses. Button.
Jonathan: That’s what we’re interested in. We’ll double-tap and see how this interface works.
Automated voice: Teach a face, take five pictures of your friend’s face and enter their name to add it to recognizable people. Button.
Jonathan: What else is on the screen? We can have a look at the faces that it knows.
Automated voice: Recognizable people, heading. You currently do not have any faces taught. Tap on, teach a face to teach envision to recognize a new person.
Jonathan: We’ll go back and have a look.
Automated voice: Recognizable people, heading. Teach a face. Take five pictures of your friend’s face and enter their name to add it to recognizable people. Button.
Jonathan: I’ll double-tap.
Automated voice: Restart button.
Automated voice: Face detected.
Jonathan: I’m going to flick right.
Automated voice: No face detected.
Automated voice: Teach faces, back button, switch to front camera.
Automated voice: No face detected.
Jonathan: I’m going to switch to the front camera.
Automated voice: No face detected.
Automated voice: Switch to back camera. Add five more images-
Automated voice: No face detected.
Automated voice: -add take photo, dimmed button.
Automated voice: Face in frame, take a picture.
Automated voice: Teach faces, add take photo button.
Automated voice: Face in frame [01:33:19]. Take a picture.
Automated voice: Add four more images.
Automated voice: Add four more images.
Automated voice: Face in frame. Take a picture.
Automated voice: Take photo button, take photo.
Automated voice: Add three more images.
Automated voice: Teach faces back. Add three more images, dimmed,
Automated voice: Face in frame. Take a picture.
Automated voice: Teach add three, take photo button.
Automated voice: Face in frame.
Automated voice: Add two more images.
Automated voice: Teach faces, back button, add two more. Take photo button.
Automated voice: Face in frame.
Automated voice: Take a photo.
Automated voice: Add one more image.
Automated voice: Add one more image, dimmed. Take photo.
Automated voice: Face in frame.
Automated voice: Take photo.
Automated voice: Face in frame. Take a picture.
Automated voice: Teach face, back button.
Automated voice: Face in frame.
Automated voice: Take photo, heading. Switch to back camera. Restart, take photo. Done button, done button. Teach a face alert, enter name.
Jonathan: I’ve got Braille screen input here. We’ll just enable that.
Automated voice: Touch Braille screen and press dot 6. Tap J-O-N-A-T-H-A-N Jonathan. M-O-S-E-N. Portrait Alert. Processing. Alert. Teaching successful. Okay, button. Jonathan Mosen.
Jonathan: That’s just to show you the user interface, because clearly there’s not much value in teaching it your own face if you’re wearing the glasses, but there was a lot going on, so I decided not to talk over it. Let me explain what was happening there. You have to take five pictures, and they suggest that you try to vary the angle a bit so that the face can be seen from various angles. It is pretty accessible: you get told when there’s a face in the frame, you can use the front camera or the back camera, and the take photo button is towards the bottom of the screen. That button will be dimmed when there is no clear view of a face in the frame, so that gives you some confidence.
What we heard there were two voices. I’ve currently got the Australian voice, Karen, as the voice in my Envision app, and you could also hear Siri talking with one of the US female Siri voices. The way that you would usually do this, I think, is that you would use the back camera and say to someone, “Look, I want to be able to recognize your face in a crowd so that I can avoid you if Envision tells me that you are there.” No, you probably don’t want to say that. So that I can find you if Envision says that you are there. Is that okay? And you take those five pictures.
Many people will be able to do this on their own, and there’s a lot of good guidance here, but to get the maximum number of angles so that this feature is as reliable as possible, you may want to consider getting a sighted person to take the pictures. I think that’s also why the front-facing camera is such a good idea, because if you hand your phone over to someone and say, “Look, could you please take pictures of your face at various angles?” and they’re willing to do that for you, then that’s probably going to give you the most reliable results.
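To make that enrollment logic concrete, here is a minimal sketch of the five-photo flow just described. detect_face and capture_frame are hypothetical stand-ins for a camera layer; this is not Envision’s actual API.

```python
# A minimal sketch of the five-photo teach-a-face flow described above.
# detect_face() and capture_frame() are hypothetical stand-ins for a
# camera layer; this is not Envision's actual API.
from typing import Callable, List

REQUIRED_PHOTOS = 5  # Envision asks for five pictures at varying angles

def teach_face(name: str,
               capture_frame: Callable[[], bytes],
               detect_face: Callable[[bytes], bool]) -> dict:
    photos: List[bytes] = []
    while len(photos) < REQUIRED_PHOTOS:
        frame = capture_frame()
        # Mirrors the app: the "Take photo" button stays dimmed until a
        # face is in the frame, so only valid shots are kept.
        if detect_face(frame):
            photos.append(frame)
    return {"name": name, "photos": photos}
```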
Explore
When you’re using the glasses and moving around, you may want to know which of your friends, family members, or even colleagues are in a room. For that, the next item on the Find menu is-
Automated voice: Explore.
Jonathan: Let’s again have a listen to the description of this feature from Envision.
Automated voice: Explore. Explore what’s around you in real-time. This feature uses a video feed to speak out objects and people it detects. It is a combination of the find objects and find people feature. Use this to explore a new environment and get to know what’s around you. For the best results, move your head around slowly.
Jonathan: Imagine this: you walk into a room, an unfamiliar room, and you’re told about where chairs and other objects are. You are also told about people that the Envision ecosystem has been trained to recognize. The big difference between this and the find object feature, which we explored previously, is that you don’t have to specify what it is that you’re looking for. This will describe things as they come into the video feed. Those are the items on the find menu.
Automated voice: Transcripts of Mosen At Large are brought to you by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at pneumasolutions.com. That’s P-N-E-U-M-A solutions.com.
Device settings
Jonathan: The next item on the main menu is this.
Automated voice: Device settings.
Jonathan: Let’s double-tap and see what’s configurable in the device settings.
Automated voice: Audio.
Jonathan: There’s a sub-menu here governing audio. I’ll double-tap.
Automated voice: Change volume.
Jonathan: You can also do this by performing a two-finger flick up and that is a shortcut into the volume options.
Automated voice: Change speed.
Jonathan: You can change the speed of the voice. Let’s see how much flexibility there is. I’ll double-tap, and I’m getting no feedback, but if I now flick up and down.
Automated voice: 20%, 25%, 30%, 35%, 40%, 45%, 50%, 55%, 60%, 65%, 70%, 75%, 80%, 85%, 90%, 95%.
Jonathan: That’s as fast as it goes.
Automated voice: 90%, 85%, 80%, 75%, 70%, 65%, 60%, 55%, 50%, 45%, 40%, 35%, 30%, 25%, 20%, 15%, 10%, 5%.
Jonathan: 5% is as slow as it goes.
Automated voice: 15%, 20%, 25%.
Jonathan: I’ve put it back at 25% and I’ll swipe down to get out of this option.
Automated voice: Change speed.
Jonathan: We’ll flick forward.
Automated voice: Change voice.
Jonathan: We can change the voice. There are several available. The one that I am using for this demonstration is not the default, but it is the one I prefer. Let’s find out what other voices are available by double-tapping.
Automated voice: Number one, currently selected.
Jonathan: That is voice number one. I will flick forward, you’ll hear the different voices.
Automated voice: Number two, number three, number four.
Automated voice: Number five.
Automated voice: Number six.
Automated voice: Number seven.
Automated voice: Number eight.
Automated voice: Number nine.
Jonathan: Let’s flick down to get out of here.
Automated voice: Change voice.
Jonathan: That’s the last item in this audio menu. We’ll go back to the previous menu.
Automated voice: Audio.
Jonathan: Flick forward.
Automated voice: Wi-Fi connected to Man and Wi-Fi 5G.
Jonathan: Connecting the glasses to Wi-Fi is important for quite a few of the tasks. For example, if you are going to call an ally, if you’re going to call an Aira agent, you will need a Wi-Fi connection. If you want to use instant text and the languages that you use are available offline, you won’t need the Wi-Fi connection for that. Using the instant text in offline mode will be less laggy so you’ll get more instant, instant text. At the moment, if you want to read documents, then those documents do go to the cloud, so you will need an internet connection for that. The functionality of the glasses is significantly impeded if you don’t have an active Wi-Fi connection. This is a really important thing to get right. I’m going to double-tap.
Automated voice: Search for Wi-Fi connections.
Jonathan: I’ll flick forward.
Automated voice: One-finger double-tap to disable Wi-Fi. Keep in mind that many features require an internet connection.
Jonathan: If you were going to use instant text in offline mode, then I guess disabling Wi-Fi, because it’s disabling that radio, would potentially prolong your battery life a little bit at the risk of having a lot of functionality unavailable to you. I’ll flick back.
Automated voice: Search for Wi-Fi connections.
Jonathan: We’ll double-tap.
[sound]
Automated voice: Man and Wi-Fi. One-finger double-tap to connect to this network.
Jonathan: That’s my 2.4 gigahertz network.
Automated voice: Mosen Towers guest. One-finger double-tap to connect to this network.
Jonathan: That’s our guest network, which we give to people who we don’t want to have access to our smart home peripherals or our network.
Automated voice: You are currently connected to Man and Wi-Fi 5G. To forget this network, do a two-finger single tap. Hidden. Fully Fam. Hunter Land.
Jonathan: We’re seeing a few networks that belong to our neighbors at this point. When you set up your Envision smart glasses, you are taken, via the Envision app, through the process of connecting to your Wi-Fi network. If you’re just going to be using these around the house or around the office and you only have one network to connect to, that’s all you’ll need to do. Chances are, though, you’ll want to add other networks, particularly if you want to use these when you’re out and about.
For that, you’ll want to add some sort of hotspot. Both iOS and Android offer a wireless hotspot feature. The iOS implementation of personal hotspot is quite finicky. For example, even when you’ve given permission for a device to access the feature, if you want to make that connection, you do have to be on the Personal Hotspot screen of your iPhone, and of course the feature does have to be enabled. It is a little easier with Android. If you are going to make extensive use of the Envision Glasses when you’re on the move and you’re an iPhone user, it absolutely can be done if you have some confidence with your device, although there will be a bit of battery drain as well. What you may like to consider is getting a dedicated personal hotspot device. You can get these from cell phone carriers pretty cheaply, and you put a SIM card into them. They have a dedicated data connection. You can just carry one around in a backpack, a purse, or a pocket, and have that dedicated internet connection for your Envision smart glasses.
Early adopters of Aira will be familiar with this, because in the first couple of generations of glasses, this is exactly how most people used them. There was a hotspot device that Aira would provide you, and that was how you got your data when you were on the move. It would be a similar setup, but I do want to stress that it’s not essential. If you can tolerate the idiosyncrasies of iOS’s personal hotspot feature, it is absolutely doable. Hopefully, most of the Wi-Fi networks that you are connecting to are secure, and so there is a one-off process of getting the Envision Glasses connected to your Wi-Fi network.
There are two ways to do this and I want to cover both of them. If you prefer to do the whole thing from the glasses without involving the Envision app at all, that is possible. As we heard in that menu, you can scan for available Wi-Fi networks. If you double-tap one and you can’t connect without a security key, then you can enter that key via a QR code. You go to letsenvision.com/qr and you’ll be presented with an edit field into which you can type the Wi-Fi password.
What we’re doing here is getting around the fact that there is obviously no keyboard on the Envision Glasses, and you’ve got to enter the password somehow. You type in that password, it generates a QR code which contains the text of that password, and you simply scan it with the Envision Glasses. As we’ve seen, scanning a QR code with the Envision smart glasses is a piece of cake. That’s a one-off process. Once you’re connected to the network, the Envision smart glasses remember those credentials and you won’t have to do it again.
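You can approximate what that page produces with the same kind of tooling. Here is a minimal sketch, assuming, as described above, that the code simply encodes the password text; the password and file name are examples, and this is an illustration rather than what letsenvision.com/qr actually generates.

```python
# A minimal sketch of a Wi-Fi password QR code to display on a monitor
# and scan with the glasses. This assumes the code simply contains the
# password text, as described above; it is an illustration, not what
# letsenvision.com/qr actually generates.
import qrcode

wifi_password = "example-password-only"  # placeholder, not a real password
qrcode.make(wifi_password).save("wifi-password.png")
# Open wifi-password.png on screen, then scan it with the glasses.
```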
That QR method could be useful in the iPhone personal hotspot scenario I was just talking about, because if you’re going to use the app method, which I’ll explain in just a moment, you obviously can’t be on the Personal Hotspot screen of your iPhone, which is necessary to make the network visible, and be in the Envision app at the same time. This is one way around that. You can be on the Personal Hotspot setup screen of your iPhone, and the glasses will see the SSID, the network name, at that point. You can generate the QR code and enter the password.
I think, though, that the most common way most people would set this up is through the Envision app itself. You can go in and configure Wi-Fi in the Glasses tab, in the feature preferences. You enter the password for the network there, and then the app, because it’s logged into the same Envision account as your smart glasses, will send the information that the glasses need to make the connection. Is there a way to connect to your iPhone’s personal hotspot given the limitations of that method? Yes, there actually is.
If you’re confident that you know the exact way that the network name, the SSID, is written, you can enter both the SSID and the network credentials in the Envision app. This obviously requires you to be absolutely 100% certain about the exact name of the SSID; one character wrong and it’s not going to work. Of course, you also have to enter the password. If you know those details, then you can set your personal hotspot up that way, and then you’re all set, ready to connect to as many Wi-Fi networks as you need.
In my experience, transitioning from our home Wi-Fi network to, say, the personal hotspot seems to happen nice and seamlessly. Once one network goes out of range, if the glasses can find another one, the connection just takes place behind the scenes, and you may not even be aware that a new Wi-Fi network is being used. A common scenario would be that you have your Wi-Fi network at home, your office network at work, and your personal hotspot set up on your Envision Glasses.
As long as you make sure that the personal hotspot feature is enabled when you leave the house, it’ll transition, when you get to the office, it’ll transition there as well. Given that Wi-Fi is in these glasses already, I look forward to the day, sometime in the future, where we have a generation of these glasses that has 5G built-in, that will really be sweet. I’m going to back out.
Automated voice: Search for Wi-Fi connections. Wi-Fi.
Jonathan: I’ll flick forward to get to the next item.
Automated voice: Bluetooth.
Jonathan: Now we have Bluetooth. The way I’m actually producing this demo is that I have a little Bluetooth receiver connected to the Envision smart glasses. It has a headphone jack, and that’s going into the mixer. That’s how we’re getting the great audio that we are because, as I said, at the time that I’m recording this, the USB audio feature is misbehaving. I’ll double-tap.
Automated voice: Disconnect from this device. One-finger double-tap to disconnect from your current Bluetooth device, two-finger single tap to forget the device.
Jonathan: I’m going to do that actually. I’m going to perform a one-finger double-tap to disconnect so you can hear what the speaker sounds like.
Automated voice: One-finger double-tap to disconnect from your current Bluetooth device, two-finger single tap to forget the device. Mike Lloyd.
Jonathan: That’s what the speaker sounds like.
Automated voice: Tara. AA2104. Bluetooth.
Jonathan: We should be back on that device now.
Automated voice: Wi-Fi.
Jonathan: There we go. Oh, that’s so much better, isn’t it? I’m glad we could do the demo this way. I have thrown several audio peripherals at this, including my little Bose SoundWear device, a hearing aid streamer for my Oticon hearing aids, headphones, and obviously this little Bluetooth device that is allowing me to bring the audio from the glasses into my mixer. It’s all been fine. It’s recognized them all, and the pairing process has been straightforward. If we go forward to the next item.
Automated voice: Language.
Jonathan: We can change the language of the glasses. I’m not going to do that at this point, but you can do that if you need to.
Automated voice: Display.
Jonathan: These glasses do have a little display that a sighted person can see or somebody with a bit of low vision can see. Let’s see if there’s an explanation of this feature.
Automated voice: Display. The display is located next to the camera and is projected on the glass prism. One-finger double-tap to toggle the settings for the display to be turned on or off.
Jonathan: I will perform a one-finger double-tap.
Automated voice: One-finger double-tap to turn off the display. One-finger double-tap to turn on the display. Swipe down to confirm your selection.
Jonathan: I presume we might get a little bit of battery saving by keeping the display off, and it’s of no use to me. I’ll go down.
Automated voice: Display.
Jonathan: We’ll flick up for the next menu item.
Automated voice: Software. You are currently on version 1.8.0. One-finger double-tap to check for new updates.
Jonathan: Oh yes, I like the idea of a new update. I’ll do that.
Automated voice: Do a one-finger double-tap to check if there are any new updates available.
Jonathan: Okay. [beep sound]
Automated voice: No new updates available.
Jonathan: That’s a shame, but they do update the glasses regularly. I’m sure that there will be an update soon. I’ll go back to the previous menu.
Automated voice: Software.
Jonathan: Then flick forward.
Automated voice: You are currently in time format.
Jonathan: This is self-explanatory: you can set the format of the time that is displayed.
Automated voice: Pairing mode.
Jonathan: If, perhaps because of a reinstall of the app or something else, your Envision app has become unpaired from your Envision Glasses, you can put the glasses in pairing mode here and complete that pairing process. This avoids the need to start from scratch with the glasses.
Automated voice: Power.
Jonathan: Let’s take a look at the power menu. I’ll double-tap.
Automated voice: Put your Envision Glasses to sleep. Battery level is at 78%. Turn off the Envision Glasses.
Jonathan: There are other ways of doing all these things that may be more convenient. From the home screen, you can check your battery level, you can also tap the button to put the glasses to sleep and you can hold the button to shut down the glasses.
Automated voice: Power.
Jonathan: That is the last item on this menu. We’ll go to the previous one.
Automated voice: Device settings.
Feature preferences
Jonathan: We have another option.
Automated voice: Feature preferences.
Jonathan: Some of these are also available in the app if you prefer to configure them that way, but let’s take a look from the glasses at what you can do in here. I’ll double-tap.
Automated voice: Voice commands.
Jonathan: As we’ve seen, voice commands are a very convenient way to get to functions on your glasses. We’ll double-tap.
Automated voice: Hinge button is enabled. To disable Hinge button, do a one-finger double-tap. Confirm by doing a one-finger swipe down.
Jonathan: I don’t want to do that. I like having the Hinge button give me access to voice commands. That is the only option here at the moment. We’ll go down.
Automated voice: Voice commands.
Jonathan: Flick forward.
Automated voice: Instant text preferences.
Jonathan: Again, there are other ways to get to many of these preferences. For example, you could invoke the context menu when you are in Instant text. Let’s have a look.
Automated voice: Offline mode is enabled. One-finger double-tap to disable offline mode, one-finger swipe down to confirm your selection. Keep in mind that offline mode might not recognize some non-Latin scripts.
Jonathan: That is not a problem that I’m likely to have. Offline mode is on, it’s definitely a lot faster. Of course if you are using the personal hotspot feature, and you’re using a data plan that has some limits, you’re saving some bandwidth and that’s always good.
Automated voice: Select recognition language. Currently recognizing in English.
Jonathan: That’s fine, that’s what I want. There are a lot of languages available. Let’s explore what they are.
Automated voice: English system language. Albanian. Bangla, Bangladesh. Bangla, India. Bosnian. Cantonese, Hong Kong. Catalan. Chinese, Taiwan. Chinese, China. Croatian. Czech, Czechia. Danish, Denmark. Dutch, Netherlands. English, United States. English, Australia. English, Nigeria. English, India. English, United Kingdom. Estonian, Estonia. Filipino, Philippines. Finnish, Finland. French, France. French, Canada. German, Germany. Greek, Greece. Gujarati, India. Hindi, India. Hungarian, Hungary. Indonesian, Indonesia. Italian, Italy. Japanese, Japan. Javanese, Indonesia. Kannada, India. Khmer, Cambodia. Korean, South Korea. Kurdish, Latin. Malayalam, India.
Marathi, India. Nepali, Nepal. Norwegian Bokmål, Norway. Polish, Poland. Portuguese, Portugal. Portuguese, Brazil. Romanian, Romania. Russian, Russia. Serbian. Sinhala, Sri Lanka. Slovak, Slovakia. Spanish, Spain. Spanish, United States. Sundanese, Indonesia. Swahili. Swedish, Sweden. Tamil, India. Telugu, India. Thai, Thailand. Turkish, Turkey. Ukrainian, Ukraine. Urdu, Pakistan. Vietnamese, Vietnam. Welsh.
Jonathan: That is super impressive. The number of languages that are available for instant text. I do not want to change this, I’ll go back.
Automated voice: Select recognition language. Instant text preferences. Scan text preferences.
Jonathan: Next, we’re looking at the Scan text preferences. Let’s see what’s here.
Automated voice: Smart guidance is enabled, Layout detection is enabled. Language detection is disabled. Word detection is disabled.
Jonathan: We’ve covered all of those features as we’ve gone through and looked at Scan text. We’ll go back.
Automated voice: Scan text preferences.
Jonathan: Go forward now.
Automated voice: Teach faces. By doing a one-finger double-tap on this screen, a notification will be sent to your phone which will allow you to teach your face on the Envision app.
Jonathan: That explains what that item does.
Automated voice: Add ally. By doing a one-finger double-tap on this screen, a notification will be sent to your phone which will allow you to add an ally on the Envision app.
Jonathan: I’ll flick forward.
Automated voice: Favorites. By doing a one-finger double-tap on this screen, a notification will be sent to your phone which will allow you to add and arrange favorites in the Envision app.
Jonathan: As you have seen throughout this demo, there are a lot of useful features and they’re organized logically into menus. You can use voice commands to get to many of the features that you want efficiently but there are times of course when voice commands are not necessarily appropriate. Favorites are another way to get to the items that you use most often. To set this up, we go back to the Envision AI smartphone app, we go to the Glasses tab and then-
Automated voice: Feature preferences button.
Jonathan: I’ll double-tap.
Automated voice: Envision ally. Add favorites. Add features to favorites and quickly access them on the Envision Glasses by swiping back on the Home screen button.
Jonathan: This is the one we want. I’ll double-tap.
Automated voice: My favorites alert processing. Add Favorites button.
Jonathan: I’ll go to the top of the screen.
Automated voice: Back button.
Jonathan: Flick right.
Automated voice: Favorites heading. Save button. My Favorites heading, Instant text.
[sound]
Jonathan: That noise tells us that actions are available. That is a VoiceOver feature, and I prefer it to having VoiceOver constantly telling me that actions are available. I’ll flick down and see what those actions are.
Automated voice: Remove from favorites. Drag item. Activate default.
Jonathan: Now that I’ve gone into this feature, there are two favorites that have been added, the Instant text and Scan text, but we can add other items as well.
Automated voice: Add Favorites button.
Jonathan: There’s an add Favorites button at the bottom of the screen. I’ll double-tap.
Automated voice: Close button.
Jonathan: Now if I flick right, we’ll have a look at all the things that you can add to your favorites. Obviously, if you add too many things, it probably defeats the purpose of having favorites.
Automated voice: Select features heading. Instant text. Checkbox button, checked. Scan text. Checkbox button, checked. Batch scan. Checkbox button, unchecked. Call an ally. Checkbox button, unchecked. Describe scene. Checkbox button, unchecked. Detect light. Checkbox button, unchecked. Recognize cash. Checkbox button, unchecked. Detect colors. Checkbox button, unchecked. Find object. Checkbox button, unchecked. Find people. Checkbox button, unchecked. Explore. Checkbox button, unchecked. Call an Aira agent. Checkbox button, unchecked.
Jonathan: I definitely want that one. I’ll double-tap.
Automated voice: Call an Aira agent. Checkbox button, checked. Voice Commands. Checkbox button, unchecked.
Jonathan: That’s the final item so I’ll go to the top of the screen.
Automated voice: Close button.
Jonathan: Choose Close.
Automated voice: Back button.
Jonathan: Now in my favorites.
Automated voice: Favorites, heading, Save button. My Favorites, heading, Scan text, Instant text, Call an Aira agent.
Jonathan: That’s good. What I want to do now is save those to the glasses.
Automated voice: Back favorites, Save button.
Jonathan: I’ll double-tap save.
Automated voice: Save alert processing. Alert Favorites saved.
Jonathan: Tremendous. Not that I’m doubting it for a moment, but let’s go back to the glasses and verify that those favorites are now in effect. I’m going to perform a two-finger swipe down to get to the home screen.
Automated voice: Home.
Jonathan: If you didn’t have any favorites active, Home would be the first item on the menu, but because we do, we can now flick backwards.
Automated voice: Scan Text.
Jonathan: There are the favorites. Scan Text.
Automated voice: Instant Text, Call an Aira Agent.
Jonathan: They’re right there on the home screen. This is a way to speed up the workflow for things that you use a lot.
Help
There’s only one item on the main menu we haven’t looked at yet and it’s this one.
Automated voice: Help.
Jonathan: As you’ve heard throughout this demonstration, the Help that Envision has put together is absolutely fantastic. Documentation is so critical for this market. They’ve done a brilliant job. Not only is there a lot of textual information but there are YouTube videos as well. Some people do learn better that way. Let’s have a quick look at what’s in the Help menu.
Automated voice: Help. Smart Guidance Training, Gestures Training, Gestures Tutorial, Guides, About.
Jonathan: We’ve explored a lot of this as we’ve gone through, but I just want to show you how comprehensive the list of guides is. We’ll go back to Guides.
Automated voice: Guides.
Jonathan: Let’s take a look at what is here if I double-tap.
Automated voice: Scan QR Code Guide, Voice Commands Guide, Instant Text Tutorial, Scan Text Tutorial, Batch Scan Tutorial, Reader Tutorial, Describe Scene Tutorial, Detect Colors Tutorial, Find Objects Tutorial, Find People Tutorial, Explore Tutorial, Call an Ally Tutorial.
Jonathan: They have taken the time and trouble to ensure that you can come up to speed really quickly with these glasses if you’re willing to read or listen to the documentation.
Conclusion
That’s a wrap on our look at the Envision Smart Glasses. I’ve taken the time to produce such a comprehensive demo because the purchase of this product is a significant outlay for anyone, and that makes whether to purchase them or not a big decision. To make it worth the money, you would have to conclude that your quality of life would improve greatly because you have them.
That determination is, of course, a personal one. The conclusion each of us reaches will vary depending on our lifestyle and preferences. When I began this review process, I was a bit skeptical, because I’m fortunate enough to be on good terms with my iPhone camera. I usually get very good results from it. But having integrated the glasses into my life for a couple of weeks, I’ve come to appreciate that even though you may have a powerful camera in your pocket thanks to your smartphone, if there’s one thing many of us as blind people run out of, it’s hands.
One of our hands is usually occupied by our mobility tool of choice, be that a white cane or a guide dog harness. That leaves one hand free to help us locate and engage with doors and the world around us, as well as pick things up and engage with any technology, particularly when we’re traveling. That may cause us to think twice about whether it is absolutely necessary to take out our phone and use the many capable apps on it to interact with the world around us. Wearing a camera at head level in the form factor of glasses changes the level of effort required to complete a range of tasks.
During the period I’ve had the Envision Smart Glasses to demo, I decided that I would wear them every day. I’d put them on in the morning, I’d use them as the need arose, and I found myself engaging with the visual world in a way I just wasn’t doing through my smartphone apps. For example, I could walk around an office building and get information from signage that I had no idea existed before. That said, I could better satisfy my curiosity about the world around me if the glasses did more things in a single mode, and I think that mode would most likely be Explore mode.
If there was going to be an impact on bandwidth or battery life as a result of Explore mode doing more things, perhaps what it does could be determined by the user through feature preferences. Personally, at a minimum, I would like Explore mode to also include the Instant Text feature, so I can be told about objects and people in my environment as well as signage. As I got into the habit of wearing the glasses day in, day out, I also found myself picking up random bits of print I might encounter throughout my day, bits of print that in the past I would never have given any thought to, even little print leaflets in stores or waiting rooms.
I thought that I would use Instant Text for this, and perhaps I might if I just wanted a quick sense of what a document contained. But the Smart Guidance capability of the glasses makes taking an accurate snap of a document with the Scan feature so simple that it’s probably just as quick to do that in the first place. The accuracy with which the Scan feature reads a wide range of printed text, including handwritten text, caused me to say, “Wow,” several times during my demo period. As someone who reviews a lot of technology, it takes a lot for me to be wowed these days.
If a page interested me enough once I’d taken a snap of it, it was easy to save it to the Envision app for later reference. Wearing the glasses, having access to sight on demand, whether that be through one of my adult children or a professional Aira agent, was very useful, and it took me back to the early, impressive days of Aira, when I was exploring many unfamiliar locations with the Aira glasses. Only you can decide whether those things are important enough to you to warrant the significant cost. How significant? Well, if you buy direct from Envision, you can buy the glasses with the lightweight titanium frames for $2,499. If you want them with Smith Optics designer frames, that’ll cost you $2,795.
Envision has significantly reduced the cost of the glasses recently, which is encouraging, and payment plans are available in some markets. Many countries offer government-funded programs that provide funding for blind people to receive assistive technology. The Envision Smart Glasses are hard to categorize because they can make a difference in both a personal and a vocational context, so I can see there being some back and forth in some countries over whether those programs draw a distinction between home and work use.
When you’re spending this much money on a product you expect to improve your quality of life, how you feel about the company matters. In this regard, Envision gives me plenty of good vibes. They remind me of the classic blindness technology companies of old. They are resourced well enough to keep the software updates cranking out, and we can be encouraged by the continuous improvement the software for the glasses has received since its release two years ago, but they’re small enough to care about each customer.
The fact that you can request an online demo of the glasses from the company, allowing you to ask questions, and then receive a personalized onboarding if you purchase makes you feel like you’re buying a premium designer product. The company also offers a 30-day right-of-return policy. It’ll be important for the company to maintain that sense of trust.
A defining moment will come when, at some point in the future, the company switches to a new generation of glasses. Will there be a trade-in option for customers with the current hardware, and how generous might that offer be? I think, though, that we are likely to see plenty of worthwhile enhancements to the current platform for some time to come.
In episode 194 of this podcast, Karthik Kannan from Envision dropped a few hints about what may be coming. Adding Be My Eyes to the Call menu would be a no-brainer, and I understand a partnership with BlindSquare is in the works. Since Instant Text is available offline for certain languages, my hope is that the Scan mode will eventually be available that way too, speeding up the process.
If there’s one big flaw in the product for me, it is the lack of multi-user support. Clearly, that has no impact on some people, but for those it affects, the lack of it could be a deal breaker. Two blind people living in the same house should not be expected to purchase two sets of glasses. Sharing an Envision account is a workaround of sorts, but it is problematic in some respects.
In short, the Envision Smart Glasses are a fantastic product. They’re intuitive, they do what the marketing says they’ll do, and they’re thoroughly documented. I don’t think you’ll be disappointed if you end up purchasing them.
[music]
Closing and contact info
Jonathan: I love to hear from you. If you have any comments you want to contribute to the show, drop me an email, written down or with an audio attachment, to jonathan, J-O-N-A-T-H-A-N, @mushroomfm.com. If you’d rather call in, use the listener line number in the United States, 864-606-6736.
[music]
Speaker: Mosen At Large Podcast.
[02:09:46] [END OF AUDIO]