Why is there still no LLVM support for Apple Silicon? It's been over four years, and Swift has had hundreds of changes (the compiler still has basic issues with being unable to complete compilation, so it still needs work). But we have to agree, at some level, that it would be much better for LLVM to support Apple hardware, and for Apple's tooling to actually build with LLVM as the back end, so that all kinds of compatibility for exploring best-of-class language use on Apple Silicon would be possible.
Ultimately this problem just points out how detrimental to progress and usability it was to omit the need for a statement terminator, such as a semicolon. Now we have a completely broken and incapable recursive descent parser that just cannot arrive at a usable conclusion on the problem line of code. So we each have to waste hundreds of hours of development time trying to work around this nonsense of a development environment. It is literally three years after this ticket was first posted, and this is still happening.
Lazy initialization implies non-concurrent access, which will be another thorn in our side as Swift matures. There must be some form of planned, ordered execution that occurs in a predictable way, so that we don't have to sprinkle lazy initialization everywhere.
One of the primary problems I am confronting is in the use of SwiftData: making sure that there is at least one tuple in place for particular models that store and convey settings across the application.
The use of .modelContext() creates a lot of lazy-initialization dependency problems, because it doesn't really allow that initialization of at least one tuple to happen readily, due to this:
Main actor-isolated property 'mainContext' can not be referenced from a non-isolated context
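A minimal sketch of one way around that error, assuming a hypothetical Settings @Model and doing the one-row seeding from main-actor code:

import SwiftData

@Model
final class Settings {
    var launchCount: Int
    init(launchCount: Int = 0) { self.launchCount = launchCount }
}

@MainActor
func ensureOneSettingsRow(in container: ModelContainer) throws {
    let context = container.mainContext   // legal here: we are on the main actor
    if try context.fetch(FetchDescriptor<Settings>()).isEmpty {
        context.insert(Settings())        // seed the single settings "tuple"
        try context.save()
    }
}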
There still seem to be issues with how selection works, but in your case, once you designate a tag, the selection type needs to be Int to get that tag value, or at least that's what seemed to work for me based on a tutorial I've since lost the link to.
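Sketched from memory, with made-up names, the pairing looks like this: the selection's type has to match the tag's type.

import SwiftUI

struct UnitPicker: View {
    @State private var selectedUnit = 0          // Int, because the tags below are Int
    var body: some View {
        Picker("Units", selection: $selectedUnit) {
            Text("Feet").tag(0)
            Text("Meters").tag(1)
        }
    }
}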
The Swift compiler, as a recursive descent parser, is terribly slow at resolving broken code. It really needs to be replaced with an LALR(1) parser built with lex/yacc, with the Swift syntax more fully described in the grammar, instead of there being so many context-sensitive choices that require the compiler to unwind and recompile from so many different levels to try to discern what is meant by the content.
Modern-day compilers should be complaining about syntax errors much more quickly. Certainly named parameters make it a lot harder to "figure out" the polymorphic resolution of the exact method. But realistically, I am having to wait five minutes or more for some problems that are only reported as:
"The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions"
The SwiftUI chaining of .onChange() and other modifiers with code in them creates the biggest barrier here. Something different needs to happen that resolves the base syntax first and then binds the polymorphic result methods. That may already be what's being done, I don't know, but I am spending day after day commenting out lines of code trying to find the syntax error and get a real error message. It's keeping me from being productive and pushing me to stop and give up again on these apps I keep trying to put together, but have to rewrite because each time I try again there is a complete new set of APIs, language features, libraries, and so on to deal with.
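For the record, the only thing that reliably clears that diagnostic for me is splitting the expression by hand into typed sub-expressions. An illustrative, made-up example:

struct Item { let price: Double; let count: Int }

// One long chained expression the solver has to resolve all at once:
// let total = items.map { $0.price * Double($0.count) }.reduce(0, +) + tax - discount

// Split into distinct sub-expressions with explicit types:
func total(for items: [Item], tax: Double, discount: Double) -> Double {
    let lineTotals: [Double] = items.map { $0.price * Double($0.count) }
    let subtotal: Double = lineTotals.reduce(0, +)
    return subtotal + tax - discount
}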
I am working on a new app with iOS 17.2 on my iPhone. The location privacy settings for "always", "when running", and "description" are all in place. However, the Settings entry for location access permissions only reveals "Never" and "When I Share" as options. This seems to be where the problem is. Another app I worked on before iOS 17, but did change to use the beta, shows all four selections; I can move it to "Always", and then my location appears in that app.
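For comparison, here is a minimal sketch of the request path as I understand it (assuming the usual NSLocationWhenInUseUsageDescription and NSLocationAlwaysAndWhenInUseUsageDescription keys in Info.plist); which options Settings surfaces seems to depend on the keys present and on what the app has actually requested at runtime.

import CoreLocation

final class LocationPermission: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    func request() {
        manager.requestWhenInUseAuthorization()
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        // Escalate to Always only once When-In-Use has been granted.
        if manager.authorizationStatus == .authorizedWhenInUse {
            manager.requestAlwaysAuthorization()
        }
    }
}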
The removal of the ++ and -- operators is about as childish a choice as one might find in language design. There are countless examples of how these operators are priceless.
let elems = pkt.split(separator: ",")
// $GPRMC,024613,V,3609.9338,N,09556.3937,W,000.0,000.0,160204,004.6,E*7E
if gpsDataType == "GPRMC," {
    let time     = elems[0]    // UTC time of fix
    let warn_V_A = elems[1]    // status: A = valid, V = warning
    let lat      = elems[2]    // latitude
    let latdir   = elems[3]    // N or S
    let lon      = elems[4]    // longitude
    let londir   = elems[5]    // E or W
    let speed    = elems[6]    // speed over ground, knots
    let course   = elems[7]    // course over ground, degrees true
    let date     = elems[8]    // date of fix
    let magvar   = elems[9]    // magnetic variation
    let dirs     = elems[10].split(separator: "*")  // variation direction + checksum
    let vardir   = dirs[0]
    let check    = dirs[1]

    let latitude  = parseLat(String(lat), dir: String(latdir))
    let longitude = parseLon(String(lon), dir: String(londir))
    self.latitude  = latitude
    self.longitude = longitude
    self.heading = Int(Double(course)!)
    self.speed   = Int(Double(speed)!)
}
Why do I have to code all these stupid indexes as constants? Why can I not use [idx++] on every index, so that I don't have to worry about which line it is, and then for other GPS data, like $GPGGA, I can just copy the lines above that I need for that sentence and not care at all what the indexes are?
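For anyone who misses them, Swift does let you declare a postfix ++ yourself; a minimal sketch (this operator definition is my own, not something from the standard library):

postfix operator ++

extension Int {
    /// C-style i++: returns the current value, then increments in place.
    @discardableResult
    static postfix func ++ (value: inout Int) -> Int {
        defer { value += 1 }
        return value
    }
}

// Standalone usage, mirroring the field walk above:
let elems = "024613,V,3609.9338,N".split(separator: ",")
var idx = 0
let time   = elems[idx++]   // "024613"
let status = elems[idx++]   // "V"
let lat    = elems[idx++]   // "3609.9338"
// ...and so on, with no hard-coded index constants.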
Isn't the CGRect passed into draw(_:) the clipping rect for where drawing needs to happen? It seems like you should be drawing into an offscreen buffer and then just blitting from that buffer into the CGRect area.
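A rough sketch of that offscreen-buffer idea (the view, the fill color, and the drawing are illustrative, not from the project under discussion):

import UIKit

final class BufferedView: UIView {
    private var buffer: UIImage?

    override func layoutSubviews() {
        super.layoutSubviews()
        // Do the expensive drawing once, into an offscreen image.
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        buffer = renderer.image { ctx in
            UIColor.systemTeal.setFill()
            ctx.fill(bounds)
        }
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        // rect is the invalidated region; blitting the whole buffer is clipped to it.
        buffer?.draw(in: bounds)
    }
}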
This appears to be an order-of-execution problem. The issue is that there are too many pointers to too many things. The delegate pattern is creating lots of issues, because I need to create a delegate object and pass the view object to it, or I have to make all the objects be delegates themselves. This greatly affects the flexibility of the software system design.
You need 'public' on those variable declarations.
Separating the details out into smaller pieces allows customization and control through one source of truth, as stressed in the WWDC presentations on SwiftUI. This makes more of the code reusable, and it lets you start to understand whether you really want to customize so many things in so many places, or just have one place where that is done.
Below, DisplayedView is what you want on your ActionSheet. Keeping that as one "name" simplifies things. But to customize that sheet view, you can still either replace DisplayedView with the specific sheets you need, or provide a parameter to it that further nests your customization, so that you can pass in an array of model objects as jjatie showed and then just have DisplayedView manage that. This allows you to use DisplayedView elsewhere with your model instances, and not have it all tied up in nested content around your navigation view.
class GeometryData {
    // Font size for body text; larger on iPad.
    var textSize: CGFloat {
        if UIDevice.current.userInterfaceIdiom == .pad {
            return 48
        }
        return 23
    }
    // Font size for the title; larger on iPad.
    var title: CGFloat {
        if UIDevice.current.userInterfaceIdiom == .pad {
            return 60
        }
        return 34
    }
    // Fraction of the available height used by each tile.
    var height: CGFloat {
        if UIDevice.current.userInterfaceIdiom == .pad {
            return 0.15
        }
        return 0.15
    }
    // Fraction of the available width used by each tile.
    var weight: CGFloat {
        if UIDevice.current.userInterfaceIdiom == .pad {
            return 0.44
        }
        return 0.43
    }
}
struct BaseView: View {
    var text: String
    var geometry: GeometryProxy
    var sizes: GeometryData = GeometryData()
    @State var active: Bool = false
    var showView: DisplayedView

    var body: some View {
        Text(text)
            .foregroundColor(.black)
            .frame(width: geometry.size.width * sizes.weight, height: geometry.size.height * sizes.height)
            .background(Color.white)
            .onTapGesture {
                active = true
            }
            .sheet(isPresented: $active) {
                showView
            }
    }
}

struct DisplayedView: View {
    var which: Int

    var body: some View {
        Text("This is the \(which) view")
    }
}
struct ContentView: View {
    private var gridItemLayout = [GridItem(.flexible()), GridItem(.flexible())]
    let viewCount = 4

    var body: some View {
        NavigationView {
            GeometryReader { geometry in
                ScrollView(.vertical, showsIndicators: true) {
                    LazyVGrid(columns: gridItemLayout, spacing: 18) {
                        Group {
                            ForEach(0..<viewCount) { item in
                                BaseView(text: "View\(item + 1)", geometry: geometry, showView: DisplayedView(which: item + 1))
                            }
                        }
                    }.padding()
                }
            }
            .navigationTitle("Title")
        }
    }
}
This still doesn't cover the problem of how you get the location touched on the map. There are missing parameters to the onGestureXXXX() chained calls; these should all provide the lat/lon of where the gesture occurred on the map.
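With the iOS 17 SwiftUI Map, one way I know of to recover the coordinate is to wrap the map in a MapReader and convert the tap's screen point through the proxy; a sketch with made-up names:

import SwiftUI
import MapKit

struct TappableMap: View {
    @State private var tapped: CLLocationCoordinate2D?

    var body: some View {
        MapReader { proxy in
            Map()
                .onTapGesture { screenPoint in
                    // Convert the tap's point in local view space to a map coordinate.
                    tapped = proxy.convert(screenPoint, from: .local)
                }
        }
    }
}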