Seismometer

Some variation between this lesson and the previous one 😉

In Section 1, as in the previous lesson, we start in the MotionDetector class. Everything is the same with regard to the private properties and the initializer. Discrepancies creep in at the start() function on line 24: they swapped the order of calling UIDevice.current.beginGeneratingDeviceOrientationNotifications() and starting the orientationObserver with the check on motionManager.isDeviceMotionAvailable and the starting of our Timer. Our three @Published properties are given values and observed when updateMotionData() is fired, with the closure onUpdate() being run at the end. Roll & pitch are updated using our custom extension at the bottom of the file, and zAcceleration is updated from a sub-property of our data constant. The stop() function has the motionManager cease all activity, ends our timer, and switches off our orientationObserver notification. The deinitializer deinit runs our stop() function when the system is ready to dismiss our MotionDetector instance. If we do not do this, our device will continue to monitor motion data, using up system resources and wasting battery.
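Here is a minimal sketch of that lifecycle. The property and method names follow the lesson, but the update interval and details are my assumptions, not Apple's exact code (the orientationObserver plumbing is left out for brevity):

```swift
import CoreMotion

// A hedged sketch of the MotionDetector lifecycle described above.
class MotionDetector: ObservableObject {
    private let motionManager = CMMotionManager()
    private var timer = Timer()

    @Published var pitch: Double = 0
    @Published var roll: Double = 0
    @Published var zAcceleration: Double = 0

    var onUpdate: () -> Void = {}

    func start() {
        if motionManager.isDeviceMotionAvailable {
            motionManager.startDeviceMotionUpdates()
            // Assumed interval: poll the sensors 60 times per second.
            timer = Timer.scheduledTimer(withTimeInterval: 1 / 60, repeats: true) { _ in
                self.updateMotionData()
            }
        }
    }

    func updateMotionData() {
        if let data = motionManager.deviceMotion {
            roll = data.attitude.roll
            pitch = data.attitude.pitch
            zAcceleration = data.userAcceleration.z
            onUpdate()
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
        timer.invalidate()
    }

    deinit {
        stop()  // release the sensors when the object goes away
    }
}
```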

In Section 2, our NeedleSeismometer measures vibrations using a needle. It focuses on the zAcceleration property of our MotionDetector in response to the device’s movement. The MotionDetector object is in play as our EnvironmentObject. The three properties that follow are the needleAnchor, the amplification constant, and rotationAngle. Within our body, we have a VStack that embeds a ZStack. Our ZStack layers the content bottom to top, so our GaugeBackground is laying flat like a dinner plate. The Rectangle is actually our needle: thin (width of 5), and it wobbles as our device moves. The Rectangle has an overlay view modifier carrying a VStack; inside that stack we have a Spacer pushing everything after it to the bottom. Finally we have a Circle meant to ‘pin’ our needle to the bottom as an anchor while it moves. The real anchor is our needleAnchor property: it tells our Rectangle to rotate around one fixed point (a UnitPoint).
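A quick sketch of the needle idea: the UnitPoint pins the rectangle’s rotation near its bottom. The names mirror the lesson, but the exact anchor coordinates and amplification value are my assumptions:

```swift
import SwiftUI

// Hedged sketch: a thin rectangle that rotates around a point near its base.
struct NeedleSketch: View {
    let needleAnchor = UnitPoint(x: 0.5, y: 0.9)  // assumed anchor position
    let amplification = 15.0                       // assumed multiplier
    var zAcceleration = 0.1                        // would come from the detector

    var rotationAngle: Angle {
        Angle(radians: zAcceleration * amplification)
    }

    var body: some View {
        Rectangle()
            .frame(width: 5, height: 120)
            .rotationEffect(rotationAngle, anchor: needleAnchor)
    }
}
```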

In Section 3 we examine the GraphSeismometer view. At the top, we have our MotionDetector, the EnvironmentObject. Below this, we have our @State private variable data, an array of Double initialized to be empty. Under this, we have a constant maxData, set to 1000. The constant maxData caps the length of the data array: once we exceed 1000 entries, the first entry in the array is removed. Then we have the private @State variable sensitivity. The higher the sensitivity, the more detailed the graph, with larger swings in reaction to movement of the device. graphMaxValue is a computed property; it takes sensitivity, graphMaxValueMostSensitive, and graphMaxValueLeastSensitive and does the math to figure out how our graph will be drawn on the device. Below this, graphMinValue is the negative of graphMaxValue.
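A sketch of how that math could work. The property names come from the lesson; the linear interpolation formula and values are my assumptions about the idea, not Apple’s exact code:

```swift
// Hedged sketch: mapping a slider value onto the graph's value range.
struct GraphRange {
    var sensitivity = 0.5                    // 0...1, driven by a Slider
    let graphMaxValueMostSensitive = 0.01    // assumed value
    let graphMaxValueLeastSensitive = 1.0    // assumed value

    var graphMaxValue: Double {
        // Linear interpolation: higher sensitivity -> smaller range -> bigger swings.
        graphMaxValueLeastSensitive
            + sensitivity * (graphMaxValueMostSensitive - graphMaxValueLeastSensitive)
    }

    var graphMinValue: Double { -graphMaxValue }
}
```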

LineGraph is the custom View that reflects our device’s movement & movement history. I see it and think of those cop shows with the polygraph test 👮‍♂️. Its parameters are data, maxData (remember, this caps how many entries our array stores), minValue and maxValue. Beneath this view, we have five modifiers: clipped, background, cornerRadius, padding & aspectRatio. aspectRatio is funny: when we change the value to greater than 1, we get a wide (horizontal) rectangle; change the value to a decimal between 0 and 1, and we get a thin (vertical) rectangle. Under this view we have our Slider, which leverages our sensitivity @State variable to adjust how our LineGraph details movement. Under the Text, we have our onAppear view modifier, and inside this closure, we are observing responses from the detector’s onUpdate closure.

In Section 4 we dive into the SeismometerBrowser, the base View of our app. From here we navigate to NeedleSeismometer or GraphSeismometer, both embedded in a List, which itself is embedded in a NavigationSplitView. To direct our users to the actual Needle or Graph, we have NavigationLink, which behaves like a Button. Inside the link, we are drawing our view with the HStack, VStack, Image & Text components. At the top we have the @StateObject detector, our MotionDetector that will be passed down to our subviews Needle & Graph. Below, we pass the motion detector object in through the .environmentObject modifier. Beneath that, we have .onAppear & .onDisappear, which toggle starting and stopping for the motion detector.
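A sketch of that browser layout. The destination views and MotionDetector come from the lesson; the labels and icons are my assumptions:

```swift
import SwiftUI

// Hedged sketch of the SeismometerBrowser structure described above.
struct SeismometerBrowser: View {
    @StateObject private var detector = MotionDetector()

    var body: some View {
        NavigationSplitView {
            List {
                NavigationLink {
                    NeedleSeismometer()
                } label: {
                    Label("Needle", systemImage: "gauge")
                }
                NavigationLink {
                    GraphSeismometer()
                } label: {
                    Label("Graph", systemImage: "waveform.path.ecg")
                }
            }
        } detail: {
            Text("Choose a seismometer")
        }
        .environmentObject(detector)     // pass the detector to subviews
        .onAppear { detector.start() }   // start sensors when visible
        .onDisappear { detector.stop() } // stop them to save battery
    }
}
```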

For Section 5, just as in the previous lesson, we have the Double extension. This is here so we have a reusable function that formats the digits we show our users.


Find the conformance to the Observable Macro here

Standard

Bubble Level 🫧

This demo got me excited 🫧

I started by duplicating the app so it wouldn’t be a Swift Package, instead a plain old vanilla app.
Moving around with the device was fun, observing how the ‘bubble’ moves in response to how I tilt and roll my device. Let’s dive into how this lesson is introduced by 🍎

Starting off, we are in the MotionDetector class. We get a brief introduction to the CoreMotion framework. It reports motion- and environment-related data: how we hold the device, which orientation it is held in, pedometer steps, etc. Next, we discuss CMMotionManager, an object that manages motion services, for example accelerometer & gyroscope data.

Step 3 shares with us the Timer class. Timers let your app schedule when to fire (or not fire) a certain method or action. In this case, our Timer updates the pitch, roll & zAcceleration values for our user to read.

Then we are going to store measurements for the pitch, roll & zAcceleration on our device for the user to read. In the demo we see they are Published properties, which means when the value changes SwiftUI will immediately observe them and update itself.

Step 5 references onUpdate, which is a closure. The documentation describes closures as a way to ‘group code that executes together, without creating a named function’. We can pass one around like a variable and inject it into function parameters. In this app, our closure is empty, but if we wanted, we could place something like a print statement there to log the message ‘Updated’ to the console.
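A tiny sketch of the idea: a closure stored in a property, replaceable from outside, then called like a function. The names mirror the lesson; the print statement is just an example:

```swift
// A closure property with an empty default, as described above.
var onUpdate: () -> Void = {}

// Someone else can swap in their own behavior later.
onUpdate = {
    print("Updated")
}

onUpdate()   // call it like a regular function, with two parentheses
```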

Following is our start() function, controlling the motionManager & Timer properties. Step 7 checks if the device can actually measure movement (what if this app were run on a Mac mini? 🧐). Next, having cleared the check, our device can measure movement, so we start receiving motion updates. After that we configure our timer property to repeat and run the function called inside of its closure.

Step 10 reviews, in detail, the function we run in the closure mentioned in Step 9. Our properties are updated here if our conditional statement is satisfied: if we get data from our motion manager, the roll, pitch & zAcceleration properties are updated, then our onUpdate closure is called. Step 11 highlights the conditional statement inside the updateMotionData function: if we aren’t receiving data from the motionManager object, we skip everything inside the if statement. On step 12, we learn why there is _ in at the end of line 38: it is the start of our closure, and since we never use the parameter the Timer passes in (the timer instance itself), we replace its name with an underscore ( _ ) to signal that we ignore it.
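A small sketch of that timer setup. The interval is an assumption; the point is the `_ in`, which ignores the Timer instance the closure receives:

```swift
import Foundation

// Hedged sketch of the repeating timer from Steps 9-12.
let timer = Timer.scheduledTimer(withTimeInterval: 1.0 / 60.0, repeats: true) { _ in
    // The closure receives the Timer itself as a parameter; we don't need
    // it, so `_` discards the name. updateMotionData() would run here.
    print("tick")
}

// Later, when we are done measuring:
timer.invalidate()
```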

In Steps 13 to 18 we dive into the function called on line 39, updateMotionData(). Desktops, laptops or Mac minis would not be able to continue past line 47: they lack the appropriate sensors. iPads & iPhones can run this. We check if we can unwrap the data from our motionManager instance’s deviceMotion property. If so, we move on to line 48. Here we have our tuple, which stores the roll & pitch returned by the currentOrientation property’s adjustedRollAndPitch function, with data’s own attitude property passed in as the argument. Step 16 describes what data.attitude is responsible for. Next we learn about the data.userAcceleration property. (adjustedRollAndPitch is a piece we will review shortly; it lives in an extension on UIDeviceOrientation.) On line 49 we have our @Published property zAcceleration, which captures the value passed from the userAcceleration.z property. onUpdate, if you will recall, is our empty closure. We call it like a regular function, with two parentheses at the end.

In Steps 19 & 20 we stop monitoring our motion sensors and turn our timer off. We also remove our observer and set the orientationObserver property to nil. In the final step, the memory stored by this object is released when deinit runs. We never call deinit ourselves; the system does, and deinit in turn calls stop(), which turns off our motionManager as mentioned in steps 19 & 20.

At the very bottom, in the extension on UIDeviceOrientation, we use the device’s current orientation to return adjusted values from the custom function adjustedRollAndPitch. It takes a CMAttitude parameter and returns a tuple of roll & pitch; if we wanted, we could also return the yaw value.
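A sketch of that extension’s idea: swap roll & pitch when the device is held in landscape. The sign handling here is my assumption, not Apple’s exact code:

```swift
import CoreMotion
import UIKit

// Hedged sketch: adjust roll & pitch based on how the device is held.
extension UIDeviceOrientation {
    func adjustedRollAndPitch(_ attitude: CMAttitude) -> (roll: Double, pitch: Double) {
        switch self {
        case .landscapeLeft:
            return (attitude.pitch, -attitude.roll)   // assumed sign convention
        case .landscapeRight:
            return (-attitude.pitch, attitude.roll)   // assumed sign convention
        default:
            return (attitude.roll, attitude.pitch)
        }
    }
}
```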

In Section 2, we pivot to a new file: OrientationDataView. This View displays left & right tilt (roll) and forward & backward tilt (pitch). The MotionDetector object is used here as a property via the @EnvironmentObject property wrapper. Since this is an ObservableObject, when the device moves & our detector object updates its values, the new data will propagate and SwiftUI will change the views to reflect the new value. Two properties, pitchString & rollString, each format a Double value using a custom extension: describeAsFixedLengthString().
Horizontal reflects the roll, and vertical is the pitch. Inside the body, we have two Texts, one for Horizontal & the other for Vertical. These values update when the detector property updates its pitch & roll values. We set the display to be a system font with a monospaced design, that way every character takes an equal amount of space.

In Section 3 we check out the BubbleLevel file. Still observing the MotionDetector object with the @EnvironmentObject property wrapper, our view will update whenever the values on the MotionDetector change.
We have here three Circles: first the large grey one created at the bottom, next the accent-colored one that responds to movement, finally the crosshair circle that stays centered on the grey bottom circle. Steps 4, 5 & 6 cover the constant properties range & levelSize. range is defined as Double.pi, so the system passes us its approximation for π. levelSize is the constant for the actual size of our circle, so we can reuse that value in many places and not get confused. bubbleXPosition & bubbleYPosition control our accent-colored circle. The heavy lifting is done inside each property, so when we reference the property, the calculating is done for us automatically 🪄. The next properties, verticalLine & horizontalLine, will be reused, so we store templates for them here.

Inside the body, we have our bottom Circle. This has the largest size (levelSize from earlier), and its foregroundStyle is set here. Beneath these settings is our overlay, which contains what needs to be drawn above our first Circle. Enter the ZStack, which layers our views on top of each other like a cake 🍰. The next circle is our accent-colored Circle, the one that moves in response to user motion. Finally, our crosshairs Circle is drawn over that; verticalLine & horizontalLine are used to make the crosshairs. The verticalLine & horizontalLine pair are used again over the large grey circle, this time reusing the levelSize constant for placement.
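A sketch of the bubble-position idea: map a roll angle in the range −π/2…π/2 onto an x-coordinate inside the level. The property names follow the lesson, but the math is my assumption about the technique:

```swift
import SwiftUI

// Hedged sketch of positioning the accent bubble from the detector's roll.
struct BubbleSketch: View {
    @EnvironmentObject var detector: MotionDetector

    let range = Double.pi
    let levelSize: CGFloat = 300

    var bubbleXPosition: CGFloat {
        // Shift roll from -range/2...range/2 to 0...range,
        // then scale onto the circle's width.
        let zeroBasedRoll = detector.roll + range / 2
        return CGFloat(zeroBasedRoll / range) * levelSize
    }

    var body: some View {
        Circle()
            .foregroundStyle(.secondary)
            .frame(width: levelSize, height: levelSize)
            .overlay(
                Circle()
                    .foregroundStyle(.tint)
                    .frame(width: 50, height: 50)
                    .position(x: bubbleXPosition, y: levelSize / 2)
            )
    }
}
```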

Section 4 is short. Here we learn how to expand the Double type using an extension. We take our motion updates from the MotionDetector, which arrive as Doubles, and format them to a fixed number of digits — tens, ones, and two fraction parts (tenths and hundredths) — as a String. The formatted function is available not only on Double, but on other Foundation types too, via the BinaryFloatingPoint protocol. (This is also where Double.pi comes from, deep in a Double extension, but I digress. 🤓) We want the value represented as a simple number, so we use the .number format style. Next comes the .sign modifier: we always want to display a (+/-) to denote positive & negative values. Below that is what I feel is most significant, the .precision modifier. This is what keeps our value formatted as +00.00, -00.01, etc.
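A sketch of that fixed-length formatting, assuming the extension is named as in the lesson (the default digit counts are my assumption):

```swift
import Foundation

// Hedged sketch: format a Double as a fixed-width signed string.
extension Double {
    func describeAsFixedLengthString(integerDigits: Int = 2,
                                     fractionDigits: Int = 2) -> String {
        formatted(
            .number
                .sign(strategy: .always())  // always show + or -
                .precision(.integerAndFractionLength(integer: integerDigits,
                                                     fraction: fractionDigits))
        )
    }
}

// Usage: (0.5).describeAsFixedLengthString() yields a string like "+00.50"
```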

Here’s a GitHub link of this app updated to conform to the Observable protocol

Standard

Meme Creator 🐼

This lesson got me creeped out by Pandas 🐼 😂 😭

This is all likely to change soon, as we now have the Observable macro for iOS, iPadOS, Mac Catalyst & tvOS 17+ and macOS 14+. Let’s just take a moment and assume our deployment target is under that iOS 17 threshold. At the bottom I will link guides for migrating from ObservableObject to the Observable macro, along with a link showing how I would have this app conform to the Observable macro.

Let’s break down what’s going on:
We start with the @StateObject PandaCollectionFetcher. PandaCollectionFetcher is an ObservableObject, which emits a signal when its @Published properties are updated.
Next, we tell MemeCreator (our base view for the app) to take in fetcher, our object representing PandaCollectionFetcher. This is an @EnvironmentObject, defined as ‘a property wrapper type for an observable object that a parent or ancestor view supplies.’ Using the Observable macro would require a different property wrapper, @Environment.


Next up we have our Panda model, Panda.swift. This struct conforms to the Codable protocol, which allows our type to be encoded to and decoded from JSON. Below our Panda struct we have PandaCollection, which also conforms to Codable; this time we see the property sample, which indicates the JSON key corresponding to the array of Panda objects.
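A sketch of those two model shapes. The type names and the `sample` key come from the lesson; the fields on Panda are my assumption for illustration:

```swift
import Foundation

// Hedged sketch of the Codable models described above.
struct Panda: Codable {
    var description: String   // assumed field
    var imageUrl: URL?        // assumed field
}

struct PandaCollection: Codable {
    var sample: [Panda]   // matches the "sample" key in the JSON payload
}
```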

Section 3 is where things get interesting. We have PandaCollectionFetcher, the star of our show, the ObservableObject. There are two @Published values, imageData & currentPanda. The name ‘imageData’ is a bit nebulous (for me anyway); it actually represents all the data in the array of our PandaCollection. We start the collection off with our default Panda, which was created in Panda.swift.

Scrolling down we see the fetchData function. Our function here is special because of the two markers async & throws. Let’s take a moment to break down what is going on under the hood, async first, then throws.

When our app runs, commands behave as though they are cars on a single-lane road. Processing would behave like a traffic light- stop & go. While one function is being executed, the ‘traffic light’ turns red and nothing else moves forward. When the function is completed, it🚦 will turn green and the next function is worked on, or drives to the intersection to be processed. Here we have async, which allows our ‘road’ to behave more closely like a highway. A function with this designation will move into a new lane and unblock other functions so they can get executed on another processor. Think of our UI as only operating on one lane (@MainActor). If too many “cars” (functions) are on this one lane, there will be a traffic jam- your app will visibly lag. This article on Swift Concurrency covers this topic, the information here is detailed and beyond the scope of this post.
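To ground the analogy, here is a hedged sketch of what a fetchData() like the lesson’s might look like: an async throwing function that downloads and decodes JSON off the main “lane”. The URL parameter is a placeholder, not the one the app uses, and PandaCollection is the Codable type from the lesson:

```swift
import Foundation

// Hedged sketch: await suspends here instead of blocking the main thread.
func fetchData(from url: URL) async throws -> PandaCollection {
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(PandaCollection.self, from: data)
}
```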

Error handling took me down a bit of a rabbit hole 🐇 🕳️
When we are reaching into our database for some information, or making a network request, things can go wrong. We need a way to be notified of what went wrong. Error handling helps us in that regard: by creating a function that can throw, we get to fail gracefully. We can send the user a popup telling them that the data is bad, or the network is unavailable, so they know more than just ‘this app isn’t working’: they can understand the why.
This article on Swift Error handling is complete, yet one question nagged me. Can an initializer throw? This post on the Swift Forums answered my question.
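The short answer is yes, an initializer can throw. A small sketch; the error case and type here are made up for illustration:

```swift
// Hedged sketch of a throwing initializer.
enum AgeError: Error {
    case negative
}

struct Person {
    let age: Int

    init(age: Int) throws {
        guard age >= 0 else { throw AgeError.negative }
        self.age = age
    }
}

do {
    let p = try Person(age: -1)
    print(p.age)
} catch {
    print("Could not create person: \(error)")
}
```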

Hang in there, gang! Almost wrapped this up 😀
The next keyword I’d like to discuss is Sendable.
Sendable relates to concurrency, as the first statement emphasizes:

‘A thread-safe type whose values can be shared across arbitrary concurrent contexts without introducing a risk of data races’

There are rules for using this keyword, but in our context we are applying it to our view ‘Meme Creator’ and it is there because of the PandaCollectionFetcher. The PandaCollectionFetcher is doing some asynchronous work that the view relies on, so we mark our view with the keyword Sendable.

Now we come to EnvironmentObject. This is our good friend, the property wrapper that observes an object for changes.

Toward the bottom of the file we see the .task modifier, which runs asynchronous work when this view appears. We want to use it when we have work that can run off the main thread (a good place for functions that require the keyword await). This goes back to what I mentioned above: we are changing lanes on our highway to let the UI update while we do what we must, then ‘merge’ back when it is time for us to wrap up. Also recall that try is there because the call could throw an error.
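A sketch of that .task pattern. The fetcher type comes from the lesson; I am assuming fetchData() is async throws, and the error handling here is deliberately simplified:

```swift
import SwiftUI

// Hedged sketch of kicking off async work when a view appears.
struct MemeSketchView: View {
    @EnvironmentObject var fetcher: PandaCollectionFetcher

    var body: some View {
        Text("Meme Creator")
            .task {
                // try? swallows errors for brevity -- a real app would
                // surface them to the user instead.
                try? await fetcher.fetchData()
            }
    }
}
```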

Next we have our LoadableImage which will display the photo from our URL. I am not sure why they did not add the caption also- I think that would have been cooler than to let the user type their own.

They close by showing the isFocused boolean, which displays a cursor so you can type your own message in the meme. Lastly, they have a Slider to change the font size for what you wrote, and a ColorPicker to change the font color.

For iOS 17, macOS 14 & later:
Manage Model Data in your app
Migrate to the Observable Macro

I updated the app ‘Meme Creator’ to adopt the @Observable macro here with a GitHub link:
https://github.com/FullMetalFist/PandaParade

Standard

Grids

I’m from a place famous for grids… and Gridlock 😂

This is a quick & sweet lesson, sharing how we can create a grid. They do a little refresher on the purpose of @State: it is the memory bank for the last selected color.
The following lesson, Editing Grids, is similar; they mention NavigationStack as a way to navigate through a hierarchy of views. Inside the ForEach, we use NavigationLink for a transition to our detail view.
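A hedged sketch of that grid-plus-navigation combination: a LazyVGrid of colored squares inside a NavigationStack, each square linking to a stand-in detail view. The data and layout are my assumptions, not the lesson’s exact code:

```swift
import SwiftUI

// Sketch: grid items wrapped in NavigationLinks inside a NavigationStack.
struct GridSketch: View {
    let colors: [Color] = [.red, .orange, .yellow, .green, .blue, .purple]
    let columns = [GridItem(.adaptive(minimum: 80))]

    var body: some View {
        NavigationStack {
            LazyVGrid(columns: columns) {
                ForEach(colors, id: \.self) { color in
                    NavigationLink {
                        color.ignoresSafeArea()   // stand-in detail view
                    } label: {
                        RoundedRectangle(cornerRadius: 8)
                            .fill(color)
                            .frame(height: 80)
                    }
                }
            }
            .padding()
        }
    }
}
```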

The Image Gallery lesson is even cooler: they share a bonus on how to use AsyncImage. It all looks easy; we just drop in the view and most of our work is done. In the past we would create our own extension with a switch to determine how to display the photo if available, show a loader view, or show a ‘no image’ stub view.
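That switch we used to write by hand is essentially what AsyncImage’s phase closure gives us for free. A sketch, with a placeholder URL:

```swift
import SwiftUI

// Sketch of AsyncImage phase handling: loading, success, and failure.
struct GalleryImageSketch: View {
    let url = URL(string: "https://example.com/photo.jpg")  // placeholder

    var body: some View {
        AsyncImage(url: url) { phase in
            switch phase {
            case .success(let image):
                image.resizable().scaledToFit()
            case .failure:
                Image(systemName: "photo")   // 'no image' stub
            case .empty:
                ProgressView()               // loader while downloading
            @unknown default:
                EmptyView()
            }
        }
    }
}
```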

They close the Grid category with this lesson. I feel like it is very strong, and it should have been one of the first we were presented with but hey 🤷‍♂️ we’re here now and it’s good. Thank you again SwiftUI Sample App team 💪

Standard

Set the Date

I’m bursting with enthusiasm over this next lesson, I’ll detail the reason as we follow through here:
The sample app here is a Date Planner. Starting out, they mention NavigationView. This is deprecated, yet for this project it is fine. If you are starting a new app on iOS 16 or later, use NavigationStack & NavigationSplitView instead. This is what presents a stack of views over the root view in SwiftUI.

Looking at the App Preview on iPhone mode, we can note the text does not appear. Switch that preview to an iPad to see the text in the center:



Anyhow, this lesson got me excited on a few different fronts. First off, this is our first time experiencing ObservableObject. This is a protocol which grants the ability to publish updates to anyone listening. Now we have properties which we publish; the view in SwiftUI listens for changes and updates its content. I like that they slowly introduce the idea of having the class with this property live outside of our view.
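A minimal sketch of that pattern: a class outside the view publishes changes, and the view re-renders when they arrive. The EventData name and its contents are my assumptions for illustration:

```swift
import SwiftUI

// Hedged sketch: an ObservableObject as the source of truth for a view.
class EventData: ObservableObject {
    @Published var events: [String] = ["Brunch", "Movie night"]
}

struct EventListSketch: View {
    @StateObject private var data = EventData()

    var body: some View {
        // Any change to data.events republishes and refreshes this List.
        List(data.events, id: \.self) { event in
            Text(event)
        }
    }
}
```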

Standard

Choose Your Path Book

Does anyone remember those ‘if you open the door, turn to page 5; if you walk down the path, turn to page 11’ style books? I sure do. Reading them consumed a big chunk of my youth before I got a game console, maybe someone else out there in Internetland had a similar experience?

That’s related to what Apple is suggesting for us here– look at the ‘choose your story’ book that they made. They used the idea of a baking contest. They break down where you can modify it and change it to make the story your own. I think that was really cool, also a fabulous app idea. If you have a story, they share how you can make your navigation stack dynamic. You may have your ‘journey’ on the app, on the device as they do here or feed the app data from a network request. Thank you for another fabulous tutorial, 🍎 🥳

Standard

Let’s chat About Me

The first sample is a demo of the Tab View in SwiftUI, titled ‘About Me’ 🤭
Before we dive into that, let’s discuss the format of how it is presented & why it is important.

This is a list of all our Sample App projects. You’ll notice each file extension is ‘swiftpm’. They did this so we can run the samples on macOS & iPad using Swift Playgrounds. I would prefer if all the samples had this option, where appropriate. Anyway, you can use more than Xcode to create demos.

TabView {
    MewView()
        .tabItem {
            Label("Info for MewView", systemImage: "cat")
        }
    SomeView()
        .tabItem {
            Label("Info for SomeView", systemImage: "book")
        }
}

Now- I don’t know if you were wondering this, but I thought- can this become like Inception? 😴
That movie comes to mind, as it had folks having dreams inside of dreams. Can we stack TabViews too? The honest answer is: we should not. Apple developers & designers should be familiar with the Human Interface Guidelines.

In ‘YourData.swift’, section 2, they introduce the idea of separating concerns: our info is stored here & the other user interface files reach in to display content. That’s usually the way things are done, whether we have data stored on disk or we are reaching to the cloud to pull it down; the user interface behaves as a template for ordering the information we get. From sections 3 to 6, they break down the content of each tab, reinforcing concepts like VStack, Text, Image & ScrollView.

Standard

Swift-dot-org

Going with the sample app on Swift.org, we construct a small activity-suggestion app which helps users decide which sports to play. I feel this tutorial was succinct, which I appreciate.
We build a new app, start tinkering with sizing (Circle), then right away modify our object by setting the color, padding it & adding an overlay. We learned about these in Landmarks; however, it was not done this quickly. This lesson is much smoother, and there is still more yet:

The tutorial introduces the idea of having ‘cards’ which highlight the sport on top and text on the bottom; we cycle through the selection like a rolodex. From here they introduce the @State property wrapper, giving our view struct eyes on this property so it can observe it for changes. When this State property updates, so does the view. We also use the State wrapper for the new property id, which allows us to ‘swipe right’ each time we tap the button. The guide also leads us into why we need State, as they show what happens when we try to update a property the view relies on without making it @State.
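A hedged sketch of the card-cycling idea: @State drives the view, and bumping id on each tap swaps in the next activity. The activity list is my assumption:

```swift
import SwiftUI

// Sketch: mutating an @State property re-renders the view.
struct ActivitySketch: View {
    let activities = ["Archery", "Baseball", "Curling", "Fencing"]
    @State private var id = 0

    var body: some View {
        VStack {
            Text(activities[id % activities.count])
                .font(.title)
            Button("Try again") {
                id += 1    // state change -> SwiftUI redraws the card
            }
        }
    }
}
```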

I did enjoy this one after feeling as though it was trench warfare in the last set. Don’t get me wrong, SwiftUI Concepts can be fun, yet it feels like that team had a larger budget with less information that stuck to me than this ten-minute tutorial on Swift.org. There are other tutorials on Swift.org, they cover console app, a library, a web service, & embedded app for a microcontroller. Depending how I feel maybe I will check on them once SwiftUI is wrapped. Thank you Swift.org team! 💐

Standard

Custom Binding, part 2

It took so long for me to get the time to successfully focus on the task, I decided to split this topic into 2 parts, and get phase 1 out so I can articulate my gain from what was read.

Where we left off was the final lesson of the second tutorial, and I mentioned I wanted to dive into docs.
Starting off at Binding: I had read it before, but so what if this is the eighth time. This time I paid closer attention to the ‘Mentioned in’ area, and sure enough, search comes up in three of the four links. I hadn’t attached significance to this before. I started reading the first link and imagining a search bar. How is it responsible? It makes some activity happen, such as a search or maybe a filter, which will probably change some other view. That makes sense.

Further down, the docs state we require storage in the form of a String to perform the search. This is where we have our first Binding, in the form of the ($). The properties are still @EnvironmentObject & @State, but this data we are sending to DepartmentList & ProductList will end up changing those views. I hope I am understanding this correctly.

struct ContentView: View {
    @EnvironmentObject private var model: Model
    @State private var departmentId: Department.ID?
    @State private var productId: Product.ID?


    var body: some View {
        NavigationSplitView {
            DepartmentList(departmentId: $departmentId)
        } content: {
            ProductList(departmentId: departmentId, productId: $productId)
                .searchable(text: $model.searchText)
        } detail: {
            ProductDetails(productId: productId)
        }
    }
}

The next link used nearly the same substance as the first, I would have expected the examples to be different 😅
Third link discusses activating the search link. I’m a little searched out, I want to dive into Binding.
Fourth link is my favorite as it has this graphic:

OK, so our view on top has a @State or @Environment property; the subview that takes the Binding gets the $ sign. Inside that subview, the property is @Binding, and the data has the potential to change the whole view. Does this make sense? Anyone agree or disagree with my theory of Binding? Feel free to email me at info<at>mvilabrera.com and let’s chat! It took me a while to grasp this, so if I am getting it wrong, please let me know. 🙂‍↕️
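Here is that theory as a minimal sketch: the parent owns the @State, the $ passes the projected Binding down, and the child reads and writes it through @Binding. The names are made up for illustration:

```swift
import SwiftUI

// Sketch of the parent/child Binding relationship.
struct ParentView: View {
    @State private var isOn = false   // source of truth lives here

    var body: some View {
        ChildToggle(isOn: $isOn)      // $ passes the projected Binding down
    }
}

struct ChildToggle: View {
    @Binding var isOn: Bool           // reads AND writes the parent's state

    var body: some View {
        Toggle("Enabled", isOn: $isOn)
    }
}
```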

Now I feel like I got heaps out of reading all those docs, yet the tutorial feels flat. I’m at an impasse, and it is self-imposed. We have SwiftUI Sample Apps, which are nice; they teach stuff we didn’t work on, such as networking, gestures & machine learning. What I do not like is that the lessons are pre-formed, like it is here. 🫤 Then there is Scrumdinger, an app that helps organize meetings, with a timer & a bell. Apple hasn’t really linked to this so… idk, but I like it because we are creating a new app & creating new files as we go. Develop in Swift is also there, covering Swift Testing, SwiftData & visionOS. I can’t decide which to start with and explore.

Standard

Custom Binding, part 1

I was hesitant to post the prior entry: so many unknowns for me about SwiftUI. It was scaring me how confused I was by the breakpoints. This lesson tied things together for me, so I will explain where I went wrong here:

This lesson introduces the idea of a custom Binding. This is my first time learning about this idea and how powerful it can be. In the sample we have been using for the past few lessons, we start out in DetailView. On line 8 we have a private var recipe which is used as a ‘source of truth’, and from this view we push this truth to other views. Does this make sense? We are also writing getters and setters, which explains why that ’emptyRecipe’ extension function kept being called. It is happening right there, on lines 11, 13 & 16. We are modifying our recipe directly from DetailView. What if we had hundreds of recipes to work with?
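A hedged sketch of the pattern: a Binding built by hand with get/set closures, with the getter falling back to a default value, which is the kind of behavior that would explain ‘emptyRecipe’ being called repeatedly. The Recipe type and defaults here are my assumptions modeled on the lesson’s names:

```swift
import SwiftUI

// Sketch: a custom Binding constructed from get/set closures.
struct Recipe { var title = "" }

struct DetailSketch: View {
    @State private var recipe: Recipe? = nil   // source of truth

    private var recipeBinding: Binding<Recipe> {
        Binding {
            // Getter: fall back to a stand-in when there is no recipe yet.
            recipe ?? Recipe(title: "Empty Recipe")
        } set: { newValue in
            // Setter: write changes back to the source of truth.
            recipe = newValue
        }
    }

    var body: some View {
        // Dynamic member lookup turns recipeBinding.title into Binding<String>.
        TextField("Title", text: recipeBinding.title)
    }
}
```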

Seeking to tinker with this idea, I duplicated the number of recipes in the JSON, incremented the id count for each, and re-ran the program. We are running DetailView for each item in our list- even though for loading views (or anything else) we should load only what we need. Is it possible to create a better example, one where we can better grasp using @State, @Binding, or Binding<T>? The example presented is interesting. I am curious architecturally, is there a setup that could be safer, load only as needed and be able to be reproduced on demand? My head is kind of spinning, I can’t get over the fact that as of this point, I don’t think I want to do things this way. Maybe I’d just need a better example of doing this right to increase my comfort. Even when falling back to the prior view, DetailView is loaded again a few times (I counted six).

The lesson closes by mentioning the difference between a plain computed property and a computed property that returns a Binding; to me, the description is cryptic.

The issue I have with the statement is at the bottom: “… the dollar sign ($) prefix tells SwiftUI to pass the projectedValue, which is a Binding”. Okay, does that mean @Binding, where we are still using the $ prefix? Off the strength of this tutorial, I’d feel better just diving into documentation after having the code seemingly bounce everywhere and leave me dizzy. So that’s where I’m going.

Standard