Hello! The age requirement in China is also 14, and I am 13 as well. I therefore contacted swiftstudentchallenge@apple.com for assistance, and they told me that my guardian could email swiftstudentchallenge@apple.com to apply on behalf of a participant under 14. So you do not need to wait: you can ask your guardian to submit an application to swiftstudentchallenge@apple.com.
However, you mentioned that the age rule in South Korea has been aligned with the international one, which is 13. If you believe Apple has made an error, you can contact swiftstudentchallenge@apple.com and ask them to correct the rule.
I hope this helps!
Certainly, you can use the features and technologies associated with Metal.
However, I should note that while I am not an expert in Metal, I do have some understanding of it. To the best of my knowledge, Metal does not work in the Simulator (a virtual machine) and must run on a physical device, though I am not certain about this. If that is the case, please make sure you ask the reviewer to run your app in Swift Playground (there is a specific option for this in the submission form) rather than Xcode, because running your app through Xcode will use a virtual machine. If Metal does work in virtual machines, please disregard this paragraph.
I hope this information helps.
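One quick way to reduce the uncertainty above is to check for Metal support at runtime: `MTLCreateSystemDefaultDevice()` returns nil when the current environment exposes no Metal-capable GPU. A minimal sketch:

```swift
import Metal

// Check whether Metal is available in the current environment.
// MTLCreateSystemDefaultDevice() returns nil when no Metal-capable
// GPU is available (as can happen in some virtualized environments).
if let device = MTLCreateSystemDefaultDevice() {
    print("Metal is available on device: \(device.name)")
} else {
    print("Metal is NOT available in this environment")
}
```

You could run this once at launch and fall back to a non-Metal code path (or show a message) when the device is nil.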
As stated in the rules, you can use third-party code and resources, but you must meet the following three requirements:
1. When submitting, you must declare where and why you used other people's code and resources.
2. You must not infringe copyright: make sure the code and resources you use are licensed for everyone to use. (The two resources you mentioned are likely not infringing.)
3. Most of your content, including its core, should be developed by yourself; third-party code and resources may only play a supporting role.
I can't guarantee this is completely accurate, but it is probably right, since I summarized it from the full terms and my own experience, and I followed it for my own submission. I hope it helps.
Due to China's regulatory framework and policies, bringing CallKit to the Chinese market presents significant challenges. CallKit provides caller ID and Voice over Internet Protocol (VoIP) functionality, which involves handling sensitive data such as users' call records and caller identity. Such functionality requires approval from the relevant Chinese government departments, so CallKit is currently unavailable in China for legal reasons.
However, I have recently heard that some WeChat users have received CallKit notifications, so I expect CallKit will become available to the general public in the near future.
You can report this issue in Feedback (https://www.apple.com/feedback/), and I am confident that Apple will address and rectify it.
The potential issue lies with the chip. Please check whether your Mac has an M2 or higher-end chip; it is possible you have an M1. On an M1, the model may not complete its training fully, even though a progress bar is still displayed.
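If you are not sure which chip your Mac has, you can check it in  > About This Mac, or from Terminal:

```shell
# Prints the chip name, e.g. "Apple M1" or "Apple M2"
sysctl -n machdep.cpu.brand_string
```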
RealityView is a high-level framework designed primarily for visionOS, but it can also be used on iOS. RealityView offers a simpler API and is more closely aligned with visionOS.
ARView can be more challenging to work with and is not compatible with visionOS, but it still has advantages for iOS app development. Notably, ARView provides access to features that are exclusive to it, such as face tracking, body tracking, geotracking (anchor placement based on latitude and longitude), object detection, App Clip Code detection, and video frame post-processing.
In summary, for iOS app development I recommend ARView because of its broader feature support; if you intend to develop visionOS apps, RealityView is the better choice.
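Note that the ARView-only features listed above also depend on the device. Before enabling one of them, you can query ARKit's support flags at runtime; a small sketch:

```swift
import ARKit

// Query at runtime which ARKit features the current device supports.
// Each configuration exposes a static `isSupported` flag.
if ARFaceTrackingConfiguration.isSupported {
    print("Face tracking is supported")
}
if ARBodyTrackingConfiguration.isSupported {
    print("Body tracking is supported")
}
if ARGeoTrackingConfiguration.isSupported {
    // Geotracking also depends on the user's location; check
    // ARGeoTrackingConfiguration.checkAvailability as well.
    print("Geo tracking may be supported on this device")
}
```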
You can also take a look at the documentation here; I hope it helps!
https://developer.apple.com/documentation/xcode/running-your-app-in-simulator-or-on-a-device#Connect-real-devices-to-your-Mac
I’ve got this! When you see this message, it means I’ve fixed the issue. You don’t need to reply to this question (if you have any great suggestions, feel free to share them). Thanks a bunch for checking in!
I suggest using DispatchQueue to control when the command starts, and methods such as .move(to:relativeTo:duration:) to control the entity's motion.
In this example code, when the user meets some specific condition (I have not written the condition itself), the "Goose" entity (assuming your Reality Composer Pro scene contains one) moves for 1 second after a 3-second delay:
RealityView { content in
    if let model = try? await Entity(named: "Scene_Name_HERE", in: realityKitContentBundle) {
        content.add(model) // Display your scene
        if isUserConditions { // Placeholder: replace with your own condition
            // When the user meets the condition, do this:
            DispatchQueue.main.asyncAfter(deadline: .now() + 3) { // wait 3 seconds
                if let goose = model.findEntity(named: "Goose") {
                    // Move the Goose over 1 second; supply your own target Transform
                    goose.move(to: Transform_Position_Data_HERE, relativeTo: goose.parent, duration: 1.0)
                }
            }
        }
    }
}
Hope it will be helpful to you!
Note: this is just an example, so the code may not run as-is and may contain errors.
Rest assured: users will not notice distant entities unless they walk a very long way through a large empty space, and ordinary users will not do that. 👀
Furthermore, for safety reasons, the immersive environment in visionOS may automatically fade the scene to transparent once the user walks a certain distance, although this behavior should adapt to the user's surroundings.
You can try to use Room Tracking:
https://developer.apple.com/documentation/visionos/building_local_experiences_with_room_tracking
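A rough sketch of how Room Tracking is typically set up, following the ARKit-for-visionOS provider pattern described in the linked article (the exact names should be verified against that documentation, and RoomTrackingProvider requires visionOS 2 or later):

```swift
import ARKit

// Sketch: observe updates about the room the user is currently in.
// Assumes visionOS 2+ and an app with an immersive space open.
func trackRoom() async throws {
    let session = ARKitSession()
    let roomTracking = RoomTrackingProvider()
    try await session.run([roomTracking])
    for await update in roomTracking.anchorUpdates {
        // Each update carries a RoomAnchor describing the room's geometry.
        print("Room anchor update: \(update.event)")
    }
}
```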
The position of a window cannot be changed programmatically in visionOS. For your requirement, I suggest using an immersive space to achieve the desired effect; that approach lets content move in sync with the user's movements. To track the user's head, you can use an AnchorEntity in RealityView.
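A minimal sketch of the head-anchored approach inside an immersive space (the box entity and its offset are placeholders for your own content):

```swift
import SwiftUI
import RealityKit

// Sketch: attach content to a head-target AnchorEntity so it
// follows the user's movements in an immersive space.
struct FollowHeadView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            // Placeholder content: a 20 cm box one meter in front of the user.
            let panel = ModelEntity(mesh: .generateBox(size: 0.2))
            panel.position = [0, 0, -1]
            headAnchor.addChild(panel)
            content.add(headAnchor)
        }
    }
}
```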
In normal times, your info won’t be gone, but it’s a good idea to save it just in case. The Beta system is a bit wonky, and if it crashes, you might lose some stuff you can’t get back.
In addition: I found the problem. There is no download option for Predictive Code Completion in the Components section of my Xcode settings, and the text there tells me that my country does not support it. I am in China, and the machine was also bought in China. Is this an Xcode error, or is the feature really unavailable in China? If so, can I work around it with a VPN or some other method?