Hello.
I am a very new developer, but I'm trying to make an AR app for iPhone and iPad. I was able to deploy a very simple app with Unity that basically just opens the camera and displays a 3D model in AR; however, the phone had to stay connected to my computer by cable. I need to be able to walk around in order to really experience the AR. I've been trying to deploy an 'ad hoc' build to iOS for the past few hours, to no avail. I noticed there are several deployment options, so I'm wondering which is best if I'm just developing and want to see the changes I make on my iPhone and iPad.
I don't need my app to be available on the App Store.
Another question I have is about compatibility between the latest Xcode I have, which is 12.5, and my iPhone, which is running iOS 14.4.2. I would think that since they are both new it would be fine, but I watched a YouTube video that said the version numbers after the first decimal point should match.
Thank you for your help.
Hey all, I am now able to view my AR app on my phone when it is connected to the computer by cable, but when I try to run the app on my phone over the network, it says 'build failed' right away. Here is the error message:
"Unity-iPhone Tests" requires a development team. Select a development team in the Signing & Capabilities editor.
However, my name is already selected under Team in Signing & Capabilities, so I'm stumped. Any ideas? Thank you!
Hello Apple devs,
I have been making an app with Unity to view houses in AR, and I am using an image target to instantiate and position the model on site. I am wondering if the mesh that the device generates during an AR session can be saved and then used as a reference for the next AR session. That way, the reference model would grow with each AR experience. In theory, after the first on-site AR session, the image target would no longer be necessary, and the device would localize itself by matching what it sees against the meshes from previous scans. This is similar to what Vuforia is doing with Area Targets, except the runtime scans would be continually added to the reference model with each AR session.
Hope that makes some sense. Here is a video of us doing an AR walkthrough to provide context. Thanks in advance.
https://www.youtube.com/watch?v=kzeVVXaKUzM
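For anyone finding this thread later: the closest native equivalent I'm aware of is ARKit's `ARWorldMap`, which can be serialized to disk and fed back into a later session for relocalization. It isn't exactly the "ever-growing mesh" described above, but saving an updated map at the end of each session approximates it. A minimal sketch (the `mapURL` path and function names are illustrative, not any particular API):

```swift
import ARKit

// Where the saved map lives between sessions (illustrative path).
let mapURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("site.worldmap")

// Call near the end of a session to persist what the device has mapped.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(
                  withRootObject: map, requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

// Build the next session's configuration from the saved map, if any.
func restoredConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    if let data = try? Data(contentsOf: mapURL),
       let map = try? NSKeyedUnarchiver.unarchivedObject(
           ofClass: ARWorldMap.self, from: data) {
        // The new session relocalizes against the previous scan.
        config.initialWorldMap = map
    }
    return config
}
```

After relocalizing, the device anchors content without needing the image target again, which matches the workflow described above.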
I am an architectural draftsman developing an AR app so clients can view architectural designs. It will primarily be used by a prefab house manufacturer's clients so they can visualize these houses on their property. I have hired a remote developer to help me build some tricky things in Unity; currently the app runs under his account, and he uses TestFlight so I can try it on my phone. My question is: how should we go about deploying the app once it's done? Is there a way he can deploy it under my business name? I don't want to give him my Apple passwords. Or should I ask him to send me a certain file so I can deploy it myself? Or maybe I should create a repository that he pushes to and pulls from, and I compile in Xcode? If I sound inexperienced, it's because I am. I've built and managed to get one app onto TestFlight before, so please pardon my naivety. I just want to make sure I set this up the right way. Is there any way he might suggest doing this that I should avoid? I'm assuming he may suggest deploying the app with his own account, which I would think I'd want to avoid, right? Thanks in advance.
I am trying to view a house I designed in AR at 1:1 scale, and it needs to be location specific, so I think I need to use an image target to coordinate the positioning. How would this be done? I've heard to use Reality Composer, but I think that is only on iOS, and I use a Mac. Do I need to use Xcode? Can I avoid Unity? Thank you.
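In case it helps later readers: Unity isn't required for this. RealityKit can anchor a model to a reference image directly in a small Xcode project. A sketch, assuming a reference image named "SitePlan" in an "AR Resources" asset group and a bundled "house.usdz" (both names are placeholders):

```swift
import RealityKit

// Anchor a 1:1 model to a printed image target.
// The image's physical size is set on the asset in the Xcode catalog,
// which is what lets ARKit place the model at real-world scale.
func makeHouseAnchor() throws -> AnchorEntity {
    let anchor = AnchorEntity(.image(group: "AR Resources", name: "SitePlan"))
    let house = try Entity.loadModel(named: "house") // house.usdz in the bundle
    anchor.addChild(house)
    return anchor
}

// In an ARView's setup code:
// arView.scene.addAnchor(try makeHouseAnchor())
```

Reality Composer (bundled with Xcode on the Mac at the time of these posts) can do the same with no code, but the code route gives more control over anchoring behavior.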
Hoping someone can help me with this. I'm trying to create an AR experience for a prefab house builder. I've made a video showing my Reality Composer screen and viewing the files on iPhone, viewable here:
https://www.youtube.com/watch?v=7VsBxxnw3pE
I exported both a .reality file and a .usdz file because I wasn't sure which was the proper way.
The first thing I'm showing on my iPhone is the .reality file. You can see that at first it has issues placing the blue hexagon shape, but it locates correctly after I move the phone left to right. The main problem here is that it doesn't bring in the house at all: when I switch to object view, only the blue hexagon is present, no house. The other problem is that the model loses its tether to the image target as soon as the image is out of view of the camera. This obviously won't work for something like a prefab house walkthrough, because people want to walk all around and through the house on site.
Now for the .usdz export. As you can see, the house is present in object view, although it is tipped on its side. I believe I can fix that issue with the Revit exporter. It did come into Reality Composer this way, but I used the rotation tools to make it sit flat and orient it with the image target. The other problem is that in AR, the image tracking doesn't seem to be working at all. I did notice a slight vibration when the phone saw the image, but no blue hexagon or house was visible.
For some background, I modeled this house in Revit and exported it as a .usdz file using a plugin. I'm running Reality Composer 1.5 from Xcode. I'm trying to develop a repeatable procedure so I can do this for clients often. Please help! Thanks.
Is anyone else having trouble with image targets in Reality Composer? When I move my phone directly over, or in very close proximity to, an image target, my scene shows correctly. But as soon as I move my camera away, the scene seems to jump to some arbitrary location in the room. When I move back to the target, it soon returns to the correct location.
I'd like to bring a prefab house into Reality Composer, and the model has a foundation that needs to sit below ground level. Reality Composer seems to pin the bottom of every object to the surface you're standing on, but this of course doesn't work for a house with a subterranean foundation. Is there a way to force it to sit 2' or so lower?
Link to a video showing the problem:
https://youtu.be/XkmTYczgwQo
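One possible workaround, if the scene ends up being loaded in code rather than straight out of Reality Composer: offset the model's vertical position so the foundation sinks below the detected floor plane. A sketch, where "house" is a placeholder .usdz name and 2 ft ≈ 0.61 m:

```swift
import RealityKit

// Place the house on a detected horizontal plane, then sink the whole
// model so the subterranean foundation sits below the floor surface.
func placeHouseSunken() throws -> AnchorEntity {
    let anchor = AnchorEntity(plane: .horizontal)
    let house = try Entity.loadModel(named: "house") // placeholder asset name
    house.position.y = -0.61 // drop the foundation about 2 ft below grade
    anchor.addChild(house)
    return anchor
}
```

Alternatively, the same effect can be had with no code by grouping the house under a parent object in Reality Composer and offsetting the house downward within the group, since the parent is what gets pinned to the surface.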
I have a .usdz file of a prefab house, and I'm trying to move an object on the site to the 0,0,0 location. The object represents a piece of paper you would set on the dirt, like an image target. But when I do this and then export to .usdz, the model seems to explode.
Thanks for your help.
It's annoying that my image-anchored scenes migrate as soon as the image goes out of frame. I'm wondering if there's a way to make an image-target-anchored model stay put as the user moves far from the image anchor, say 100 feet or so. I'm using this for house-sized models.
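A pattern that may help here (a sketch, not a confirmed fix; `houseEntity` and the delegate wiring are assumptions for illustration): instead of leaving the model parented to the live image anchor, copy the image's pose into a world-fixed anchor the first time it is detected. World tracking then keeps the model in place even when the image leaves the frame.

```swift
import ARKit
import RealityKit

// Re-parent the model from the image anchor to a world-fixed anchor
// the first time the image is detected, so the model no longer
// depends on the image staying in view.
final class ImagePinningDelegate: NSObject, ARSessionDelegate {
    let arView: ARView
    let houseEntity: Entity // the model to pin (assumed to exist)
    private var pinned = false

    init(arView: ARView, houseEntity: Entity) {
        self.arView = arView
        self.houseEntity = houseEntity
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard !pinned,
              let imageAnchor = anchors.compactMap({ $0 as? ARImageAnchor }).first
        else { return }
        // Freeze the detected pose into a world-fixed anchor.
        let worldAnchor = AnchorEntity(world: imageAnchor.transform)
        worldAnchor.addChild(houseEntity)
        arView.scene.addAnchor(worldAnchor)
        pinned = true
    }
}
```

The trade-off is that the model no longer re-snaps if the initial detection was slightly off, so it may be worth allowing a few detections before pinning.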
I have been doing some AR work bringing my houses from Revit Architecture into the real world, but now I'm interested in going the other direction: bringing a scanned mesh of the world into Revit. Does anyone know if you can capture a mesh during an AR session and save it to the device? I'd be looking to bring that back into Revit.
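On LiDAR devices, ARKit's scene reconstruction exposes the scanned surfaces as `ARMeshAnchor`s, and their vertex data can be written out by hand. A minimal sketch that dumps vertex positions to a bare-bones OBJ (positions only, no faces; function and path names are illustrative, and the session needs `sceneReconstruction = .mesh` enabled):

```swift
import ARKit

// Walk the frame's mesh anchors and write their vertices, transformed
// into world space, as OBJ "v x y z" lines.
func exportMeshOBJ(from frame: ARFrame, to url: URL) throws {
    var obj = ""
    for case let meshAnchor as ARMeshAnchor in frame.anchors {
        let vertices = meshAnchor.geometry.vertices
        let base = vertices.buffer.contents()
        for i in 0..<vertices.count {
            // Each vertex is three Floats at offset + i * stride.
            let p = base.advanced(by: vertices.offset + i * vertices.stride)
                .assumingMemoryBound(to: Float.self)
            let world = meshAnchor.transform * SIMD4<Float>(p[0], p[1], p[2], 1)
            obj += "v \(world.x) \(world.y) \(world.z)\n"
        }
    }
    try obj.write(to: url, atomically: true, encoding: .utf8)
}
```

An OBJ like this imports into most 3D tools; getting it into Revit would likely mean passing through an intermediate format Revit accepts, which is a separate step from the capture itself.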