Hello,
I have an iOS app that records audio and works fine on iPads and iPhones. It asks for microphone permission, and after that recording works.
I installed the same app on my M3 MacBook via TestFlight, since iPad apps are supposed to work that way without changes. The app starts fine, but it never asks for microphone permission, so I can't record.
Do I need to do something to make this happen? (This is not Mac Catalyst; it's the arm64 iPhone binary running on macOS.)
thanks
I am using Xcode on my MacBook Pro (M1) and trying to run the iOS app I'm developing on my iPad (5th generation, 12.9 inch).
The devices are connected through a USB-C cable.
I have enabled developer mode on my iPad, trusted the M1 device, the developer, and the app.
However, the following appears on the iPad when trying to launch the app:
Unable to Verify App
An internet connection is required to verify trust of developer "Apple Development: ...". This app will not be available until verified
This seems to be an issue with the M1 specifically, as other people seem to have this problem, and the application runs successfully on other iOS devices.
Before Apple Silicon was a thing, I published a separate version of my iOS app for macOS. I'm no longer able to maintain this version, so I would like users to be able to use the iPadOS version of my app on Apple Silicon. Unfortunately, I can't make it available through Pricing & Availability, because it says:
"Once your macOS version has been approved, your iOS app will no longer be available to Mac users." (translated from the German user interface)
How do I remove the previous version and publish the iPadOS version to the Mac App Store?
Hi, I just got an Apple M3 Pro to try out some JAX operations. I see that development is actively ongoing, so maybe this error report can help.
This is the environment:
Metal device set to: Apple M3 Pro
systemMemory: 18.00 GB
maxCacheSize: 6.00 GB
jax: 0.4.26
jaxlib: 0.4.23
numpy: 1.26.4
python: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:49:36) [Clang 16.0.6 ]
jax.devices (1 total, 1 local): [METAL(id=0)]
process_count: 1
platform: uname_result(system='Darwin', node='MKFL96VR9YT', release='23.4.0', version='Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030', machine='arm64')
This is a minimal example that produces an error, I think due to the FFT part (the lowered function returns a complex-valued tensor, which appears to be the unsupported data type):
from jax import numpy as np
array = np.ones((16, 16))
np.fft.fft2(array)
This is the full traceback:
Traceback (most recent call last):
  File "/Users/user/Downloads/wow.py", line 5, in <module>
    np.fft.fft2(array)
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/numpy/fft.py", line 216, in fft2
    return _fft_core_2d('fft2', xla_client.FftType.FFT, a, s=s, axes=axes,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/numpy/fft.py", line 210, in _fft_core_2d
    return _fft_core(func_name, fft_type, a, s, axes, norm)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/numpy/fft.py", line 102, in _fft_core
    transformed = lax.fft(arr, fft_type, tuple(s))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/traceback_util.py", line 179, in reraise_with_filtered_traceback
    return fun(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 298, in cache_miss
    outs, out_flat, out_tree, args_flat, jaxpr, attrs_tracked = _python_pjit_helper(
    ^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 176, in _python_pjit_helper
    out_flat = pjit_p.bind(*args_flat, **params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/core.py", line 2788, in bind
    return self.bind_with_trace(top_trace, args, params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/core.py", line 425, in bind_with_trace
    out = trace.process_primitive(self, map(trace.full_raise, args), params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/core.py", line 913, in process_primitive
    return primitive.impl(*tracers, **params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 1494, in _pjit_call_impl
    return xc._xla.pjit(name, f, call_impl_cache_miss, [], [], donated_argnums, # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 1471, in call_impl_cache_miss
    out_flat, compiled = _pjit_call_impl_python(
    ^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/pjit.py", line 1406, in _pjit_call_impl_python
    lowering_parameters=mlir.LoweringParameters()).compile()
    ^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/interpreters/pxla.py", line 2369, in compile
    executable = UnloadedMeshExecutable.from_hlo(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/interpreters/pxla.py", line 2908, in from_hlo
    xla_executable, compile_options = _cached_compilation(
    ^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/interpreters/pxla.py", line 2718, in _cached_compilation
    xla_executable = compiler.compile_or_get_cached(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/compiler.py", line 266, in compile_or_get_cached
    return backend_compile(backend, computation, compile_options,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/profiler.py", line 335, in wrapper
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/jaxmetal/lib/python3.11/site-packages/jax/_src/compiler.py", line 238, in backend_compile
    return backend.compile(built_c, compile_options=options)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
jaxlib.xla_extension.XlaRuntimeError: UNKNOWN: <unknown>:0: error: 'func.func' op One or more function input/output data types are not supported.
<unknown>:0: note: see current operation:
"func.func"() <{arg_attrs = [{mhlo.layout_mode = "default", mhlo.sharding = "{replicated}"}], function_type = (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>>, res_attrs = [{jax.result_info = "", mhlo.layout_mode = "default"}], sym_name = "main", sym_visibility = "public"}> ({
^bb0(%arg0: tensor<16x16xf32>):
  %0 = "mhlo.convert"(%arg0) : (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>>
  %1 = "mhlo.fft"(%0) {fft_length = dense<16> : tensor<2xi64>, fft_type = #mhlo<fft_type FFT>} : (tensor<16x16xcomplex<f32>>) -> tensor<16x16xcomplex<f32>>
  "func.return"(%1) : (tensor<16x16xcomplex<f32>>) -> ()
}) : () -> ()
<unknown>:0: error: failed to legalize operation 'func.func'
<unknown>:0: note: see current operation:
"func.func"() <{arg_attrs = [{mhlo.layout_mode = "default", mhlo.sharding = "{replicated}"}], function_type = (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>>, res_attrs = [{jax.result_info = "", mhlo.layout_mode = "default"}], sym_name = "main", sym_visibility = "public"}> ({
^bb0(%arg0: tensor<16x16xf32>):
  %0 = "mhlo.convert"(%arg0) : (tensor<16x16xf32>) -> tensor<16x16xcomplex<f32>>
  %1 = "mhlo.fft"(%0) {fft_length = dense<16> : tensor<2xi64>, fft_type = #mhlo<fft_type FFT>} : (tensor<16x16xcomplex<f32>>) -> tensor<16x16xcomplex<f32>>
  "func.return"(%1) : (tensor<16x16xcomplex<f32>>) -> ()
}) : () -> ()
I'd be happy to run more tests should you need them; I'm new to this, so I'm not sure which just yet.
Many thanks!!
Hello,
When I build an Xcode project on an Apple Silicon Mac, I run into some issues.
The project contains Pods and Swift Packages.
I could not run the application at all and always got the following error:
Could not find module '***' for target 'x86_64-apple-ios-simulator'; found: arm64, arm64-apple-ios-simulator, at: ***
I tried the following to resolve this issue:
Always embed swift standard libraries = YES
Build Active Architure Only = YES
UIRequiredDeviceCapabilities = armv7
Excluded Architectures > Debug > Any iOS Simulator SDK arm64 add
Open Using Rosetta
Excluded Architectures > Debug > Any iOS Simulator SDK arm64 remove
However, the issue persists. :(
Do you have any solution for this problem?
Thank you in advance!
Ventura 13.2.1 M1
Sonoma 14.2.1 M2
In my app I have a signal handler.
When testing it with a null dereference, I see that on previous macOS versions, like Monterey 12.0 on x86, the signal handler is called.
However, on my Apple Silicon Ventura/Sonoma machines it's not called.
I tried with SIP enabled and disabled.
So I created a binary with this code:
#include <iostream>

int main() {
    int *ptr = nullptr;
    std::cout << *ptr; // Dereference null pointer
    return 0;
}
Compiled it with:
g++ null.cpp -o null.bin
And executed it with and without sudo.
The app indeed crashes because of the null dereference (and a core dump is created when SIP is disabled).
However, no signal is received. I am able to prove it with DTrace.
DTrace script:
#pragma D option quiet

proc:::signal-send
{
    @[execname, stringof(args[1]->pr_fname), args[2]] = count();
}

END
{
    printf("%20s %20s %12s %s\n",
        "SENDER", "RECIPIENT", "SIG", "COUNT");
    printa("%20s %20s %12d %@d\n", @);
}
Here is the output: in the left terminal I executed the binary; the right terminal shows the script output.
On top of DTrace, I created a macOS Endpoint Security app and subscribed to ES_EVENT_TYPE_NOTIFY_SIGNAL. Same there: no signal.
Did anything change with signals on macOS 13.0 on M1/M2?
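For reference, this is the kind of handler I'd expect to fire (a minimal sketch; I install it for both SIGSEGV and SIGBUS since I'm not certain which one arm64 raises for a null dereference):

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_crash(int sig, siginfo_t *info, void *ctx) {
    // Only async-signal-safe calls in here.
    const char msg[] = "signal handler called\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    _exit(1);
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = on_crash;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGBUS, &sa, NULL); // assumption: the fault may arrive as SIGBUS on arm64

    volatile int *ptr = NULL;
    *ptr = 1; // deliberate null dereference
    return 0;
}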
After enabling the signal alternate stack using sigaltstack, the thread stack region size comes out to 128 MB.
Without the alternate signal stack, the stack region size is only 8 MB (obtained by calling the mach_vm_region_recurse function).
This only happens on macOS 14 or later, on M-series silicon.
The growth of the stack region size results in a too-large minidump file when we generate a crash report using Google's Crashpad library.
Could anybody tell me why the size has increased so much? And is there a way to work around it?
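This is roughly how we query the size (a simplified, self-inspecting sketch of the mach_vm_region_recurse call; our real code inspects the crashed process from the handler process):

#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>

// Print the size of the VM region containing addr (here: a stack address).
static void print_region_containing(const char *label, uintptr_t addr) {
    mach_vm_address_t address = addr;
    mach_vm_size_t size = 0;
    natural_t depth = 0;
    vm_region_submap_info_data_64_t info;
    mach_msg_type_number_t count = VM_REGION_SUBMAP_INFO_COUNT_64;

    if (mach_vm_region_recurse(mach_task_self(), &address, &size, &depth,
                               (vm_region_recurse_info_t)&info, &count) == KERN_SUCCESS) {
        printf("%s: region at 0x%llx, size %llu MB\n", label,
               (unsigned long long)address, (unsigned long long)(size >> 20));
    }
}

int main(void) {
    int probe = 0; // lives on the main thread's stack

    print_region_containing("before sigaltstack", (uintptr_t)&probe);

    static char altstack[SIGSTKSZ];
    stack_t ss = { .ss_sp = altstack, .ss_size = sizeof(altstack), .ss_flags = 0 };
    sigaltstack(&ss, NULL);

    print_region_containing("after sigaltstack", (uintptr_t)&probe);
    return 0;
}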
Tried various how-tos on YouTube and GitHub. I have conda.
The third step fails:
conda install -c apple tensorflow-deps
pip install tensorflow-macos
pip install tensorflow-metal
ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none)
ERROR: No matching distribution found for tensorflow-metal
I see a lot of fixes for Intel-based Macs, but none for the M3. Help!?
I have Xcode 15.3 (15E204a). When I try to compile my application, the following errors occur:
Undefined symbol: _GDTCCTConstructiOSClientInfo
Undefined symbol: _GDTCCTNetworkConnectionInfoNetworkMobileSubtype
Linker command failed with exit code 1 (use -v to see invocation)
Any solution to fix this issue?
I am using UDP communication in an app. Here is what I do:
Initialise a UDP broadcast connection object.
Bind it to a port to listen.
Receive the IP & port from the UDP connection, to connect further over a TCP connection.
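In BSD-socket terms, the flow is roughly this (a simplified sketch, not our actual code; the port number 5000 is made up):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes)); // 1. broadcast-capable socket

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);              // 2. bind a known port to listen on
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[512];
    struct sockaddr_in peer;
    socklen_t plen = sizeof(peer);
    ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0,
                         (struct sockaddr *)&peer, &plen); // 3. receive the IP & port payload
    if (n >= 0) {
        buf[n] = '\0';
        printf("from %s:%d -> %s\n", inet_ntoa(peer.sin_addr), ntohs(peer.sin_port), buf);
    }
    close(fd);
    return 0;
}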
After updating to Xcode 15.3, it works as long as the iPad is connected to the Mac in debug mode. When I create a build to test remotely, it stops receiving the IP & port over the UDP connection.
Here is how I concluded this is an Xcode issue:
I tried to debug this with Xcode 15.2, and it works as expected both while debugging and from a created build.
Any help / suggestion would be appreciated.
When running an iOS app as "Designed for iPad" on an M1 Mac mini, the UIImagePickerController.isSourceTypeAvailable(.camera) API returns true, leading to a crash (attached) if the camera is selected to upload an image to the app, since my much-loved Mac mini does not have a camera.
For the moment I have disabled the camera when the platform is Mac by adding the qualification:
ProcessInfo().isiOSAppOnMac == false, but this seems like a bug. Or does the crash also happen on Macs with cameras?
Other image picker options work fine.
Crash log
Hello,
I am developing a tool in Python or Node.js to intercept flows between two applications (MITM).
I want to use the Frida library, but when I use it I get the following error: Error: module not found at "/usr/lib/libSystem.B.dylib".
Indeed, the library is not present in that folder. I tried to get help directly from Frida but couldn't find any, and on this forum I did see some posts talking about this problem, but I couldn't solve it.
Do you have an idea of how to solve this problem?
Thank you.
PS: I'm new to the Apple ecosystem (Mac mini, Apple M1).
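From what I've read, recent macOS versions ship the system libraries only inside the dyld shared cache, so the file is absent on disk yet still loadable. A quick C check (a sketch of that observation, not a fix) shows the difference:

#include <dlfcn.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat st;
    // The path has no on-disk file on recent macOS...
    printf("stat:   %s\n",
           stat("/usr/lib/libSystem.B.dylib", &st) == 0 ? "exists on disk" : "not on disk");

    // ...but dlopen still resolves it via the dyld shared cache.
    void *h = dlopen("/usr/lib/libSystem.B.dylib", RTLD_LAZY);
    printf("dlopen: %s\n", h ? "resolved via shared cache" : dlerror());
    return 0;
}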
I am working on an Apple M2 Pro running macOS Sonoma 14.3.1.
Today Xcode automatically updated from 15.2 to 15.3 and downloaded the new 17.4 simulators and runtime tools. I can no longer run my apps in the simulator.
On Xcode 15.2, there was an option to choose the destination architecture under Product -> Destination -> Destination Architectures -> Rosetta (the one I have been required to select in order to run apps for the last few versions).
On Xcode 15.3, the option to choose the destination architecture is missing.
I am still able to build successfully to my phone directly.
I am unable to build to a simulator; I get the same linker error each time.
I have tried:
reboot laptop
delete information stored in derived data
delete local Podfile.lock
delete Pods folder
pod install
reopen Xcode
run on device - works!
run on 17.2 simulator - fails with error
run on 17.4 simulator - fails with error
Our Podfile looks like this:
require_relative '../node_modules/@react-native-community/cli-platform-ios/native_modules'
require_relative '../node_modules/react-native-permissions/scripts/setup'

platform :ios, '13.4'
prepare_react_native_project!

setup_permissions([
  'AppTrackingTransparency',
  'Camera',
  'LocationAlways',
  'LocationWhenInUse',
  'Notifications',
])

target 'myapp' do
  config = use_native_modules!

  # @react-native-firebase/app requirement:
  use_frameworks! :linkage => :static
  $RNFirebaseAsStaticFramework = true

  use_react_native!(
    :path => config[:reactNativePath],
    # to enable hermes on iOS, change `false` to `true` and then install pods
    :hermes_enabled => false
  )

  # Pods for GoogleMaps on iOS
  rn_maps_path = '../node_modules/react-native-maps'
  pod 'react-native-google-maps', :path => "#{rn_maps_path}"
  pod 'react-native-camera', path: '../node_modules/react-native-camera', subspecs: [
    'BarcodeDetectorMLKit'
  ]
  pod 'RNSquareInAppPayments', :path => '../node_modules/react-native-square-in-app-payments'

  target 'myappTests' do
    inherit! :complete
    # Pods for testing
  end

  # Enables Flipper.
  #
  # Note that if you have use_frameworks! enabled, Flipper will not work and
  # you should disable the next line.
  # use_flipper!("Flipper-DoubleConversion" => "1.1.7") # avoid duplicate symbols for architecture x86_64 for Folly

  post_install do |installer|
    react_native_post_install(installer)
    installer.pods_project.targets.each do |target|
      if target.name == "RCT-Folly"
        target.build_configurations.each do |config|
          config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= ['$(inherited)', 'FOLLY_HAVE_CLOCK_GETTIME=1']
        end
      end
    end
  end
end
I'm open to additional suggestions. At this point, I can't see a way to tell Xcode that we need the other build option (like specifying Rosetta, which I used to be able to do). Also, if anyone can help me understand why it is doing this: I've been busy on Google but am not finding what I'm looking for, so I wonder if I'm searching for the right things.
Thanks so much!
We develop virtual instruments for Mac/AU and are trying to get our AU plug-ins and our standalone player to work with Audio Workgroups.
When the standalone app or Logic Pro is in the foreground and active, all is well and as expected.
However, when the app or Logic Pro is not in focus, all my auxiliary threads are running on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met.
The processing thread itself stays on a P-core, but has to wait for the other threads to finish.
How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
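For reference, the auxiliary threads join roughly like this (a simplified sketch; in our real code the os_workgroup_t comes from the device's kAudioDevicePropertyIOThreadOSWorkgroup property):

#include <os/workgroup.h>
#include <stddef.h>

// Each auxiliary render thread joins the audio device's workgroup
// before doing deadline-sensitive work.
static void *aux_render_thread(void *arg) {
    os_workgroup_t wg = (os_workgroup_t)arg; // the IO thread's workgroup

    os_workgroup_join_token_s token;
    if (os_workgroup_join(wg, &token) != 0) {
        return NULL; // join can fail, e.g. if the workgroup was cancelled
    }

    // ... render-assist work under the workgroup's deadlines ...

    os_workgroup_leave(wg, &token);
    return NULL;
}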
I'm honestly a bit lost and looking for general pointers. Here is the general flow of my project: I have an Xcode project where I want to read and convert the temperature values accessed from the Apple SMC, and I found a GitHub repo with all the SMC key sensors for the M3 Pro/Max chips: https://github.com/exelban/stats/issues/1703. Basically, I have all these keys stored in an array in Objective-C like so:
NSArray *smcKeys = @[ @"Tp01", @"Tp05", @"Tp09", @"Tp0D", @"Tp0b", @"Tp0f", @"Tp0j", @"Tp0n",@"Tp0h", @"Tp0L", @"Tp0S", @"Tp0V", @"Tp0z", @"Tp0v", @"Tp17", @"Tp1F", @"Tp1J", @"Tp1p", @"Tp1h", @"Tp1R", ];
I am passing all these keys ('smcKeys') into a regular C file that is meant to open, close, and read the data, shown here:
#include "smc.h"
#include <mach/mach.h>
#include <IOKit/IOKitLib.h>
#include "smckeys.h"
io_connect_t conn;
kern_return_t openSMC(void) {
kern_return_t result;
kern_return_t service;
io_iterator_t iterator;
service = IOServiceGetMatchingServices(kIOMainPortDefault, IOServiceMatching("AppleSMC"), &iterator);
if(service == 0) {
printf("error: could not match dictionary");
return 0;
}
result = IOServiceOpen(service, mach_task_self(), 0, &conn);
IOObjectRelease(service);
return 0;
}
kern_return_t closeSMC(void) {
return IOServiceClose(conn);
}
kern_return_t readSMC(char *smcKeys, SMCVal_t *val) {
kern_return_t result;
uint32_t keyCode = *(uint32_t *)smcKeys;
SMCVal_t inputStruct;
SMCVal_t outputStruct;
inputStruct.datasize = sizeof(SMCVal_t);
inputStruct.datatype = 'I' << 24; //a left shift operation. turning the I into an int by shifting the ASCII value 24 bits to the left
inputStruct.data[0] = keyCode;
result = IOConnectCallStructMethod(conn, 5, &inputStruct, sizeof(SMCVal_t), &outputStruct, (size_t*)&inputStruct.datasize);
if (result == kIOReturnSuccess) {
if (val -> datasize > 0) {
if (val -> datatype == ('f' << 24 | 'l' << 16 | 't' << 8 )) { //bit shifting to from 32bit operation associated with the ASCII charecters'f', 'l', and 't', sets datatype field.
double temp = *(double *)val -> data;
return temp;
}
}
}
return 0.0;
}
I am then calling the functions from this file in a Swift file and converting the values to Fahrenheit, but no data is being printed in my console:
import IOKit

public struct SMCVal_t {
    var datasize: UInt32 = 0
    var datatype: UInt32 = 0
    var data: (UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8, UInt8) = (0, 0, 0, 0, 0, 0, 0, 0)
}

// Top-level declarations: @_silgen_name bindings must not be instance
// methods, or self would be passed as a hidden extra argument.
@_silgen_name("openSMC")
func openSMC() -> kern_return_t

@_silgen_name("closeSMC")
func closeSMC() -> kern_return_t

@_silgen_name("readSMC")
func readSMC(key: UnsafePointer<CChar>?, val: UnsafeMutablePointer<SMCVal_t>) -> kern_return_t

public class getTemperature {
    // Decode an SMC "flt " value (4 little-endian float bytes) and convert to Fahrenheit.
    func convertAndPrint(val: SMCVal_t) -> Double {
        if val.datatype == (UInt32(UInt8(ascii: "f")) << 24 | UInt32(UInt8(ascii: "l")) << 16 | UInt32(UInt8(ascii: "t")) << 8) {
            let bits = UInt32(val.data.0) | UInt32(val.data.1) << 8 | UInt32(val.data.2) << 16 | UInt32(val.data.3) << 24
            let celsius = Double(Float(bitPattern: bits))
            return celsius * 9.0 / 5.0 + 32.0
        }
        return 0.0
    }

    func convertAndPrintTempValue(key: UnsafePointer<CChar>?, scale: Character, showTemp: Bool) -> kern_return_t {
        var result = openSMC()
        guard result == KERN_SUCCESS else {
            print("Failed to open SMC: \(result)")
            return result
        }
        defer { _ = closeSMC() } // close after reading, not before

        var smcValue = SMCVal_t()
        result = readSMC(key: key, val: &smcValue) // actually read the key
        guard result == KERN_SUCCESS else {
            print("Failed to read SMC key: \(result)")
            return result
        }

        print("Temperature: \(convertAndPrint(val: smcValue))°F")
        return KERN_SUCCESS
    }
}
I know this is a lot, but I am honestly looking for any tips to fill in the gaps in my knowledge, from anyone who's built a similar application meant to extract any sort of data from Mac hardware.
I'm working on an app for macOS where it would be very useful to display the GPU workload as a percentage. CPU usage monitoring is easy, but GPU monitoring on Apple Silicon is next to impossible. Apple only seems to give us our own app's GPU usage, which is not what we want, since we want the total GPU workload for the whole system. I'm using the latest version of Xcode and Swift; any ideas how to achieve this?
I'm struggling with compiling libopus so that it works in the simulator on Apple Silicon. I found a thread on the forums that seems to address part of the issue, but I am unable to build the static lib so that it shows the platform it is targeting.
The thread mentions that I should be able to run otool and see a "load commands" that indicate the platform.
When I run otool against the static library that we have created, it doesn't list any load commands. I don't see LC_BUILD_VERSION or LC_VERSION_MIN_***. Why would there not be any "Load command" entries?
% otool -l -arch arm64 dependencies/lib/libopus.a
Archive : dependencies/lib/libopus.a
dependencies/lib/libopus.a(bands.o): is an LLVM bit-code file
dependencies/lib/libopus.a(celt.o): is an LLVM bit-code file
dependencies/lib/libopus.a(celt_encoder.o): is an LLVM bit-code file
dependencies/lib/libopus.a(celt_decoder.o): is an LLVM bit-code file
...
dependencies/lib/libopus.a(mlp.o): is an LLVM bit-code file
dependencies/lib/libopus.a(mlp_data.o): is an LLVM bit-code file
The static library has the two architectures embedded in it, but when compiling the framework for the simulator platform the linking phase complains that we are building for the simulator, but linking object code built for ios.
% lipo -info dependencies/lib/libopus.a
Architectures in the fat file: dependencies/lib/libopus.a are: x86_64 arm64
In case you are curious, I'm just piggybacking on this project, which has a build-libopus.sh script in the root directory that builds the official open-source Opus library files. My hope is to build this static library for the ios, ios-simulator, and mac-catalyst platforms and then include them in an xcframework.
Is there a way in pure C (not Objective-C, and certainly not Swift) to detect if an iOS app is running on a Mac? I'm aware of iOSAppOnMac, but AFAICT I'd need to write Objective-C code to use it. My application is quite old and large, with portions going back to the 1980s, so the burden of moving to anything that can't be accessed directly from C is quite high.
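The closest thing I've found is calling the Objective-C runtime's C interface, which needs no .m files. A sketch (assuming Foundation is already linked, and treating a missing isiOSAppOnMac property, i.e. an OS before iOS 14, as "not on Mac"):

#include <objc/runtime.h>
#include <objc/message.h>
#include <stdbool.h>

static bool is_ios_app_on_mac(void) {
    // [NSProcessInfo processInfo]
    Class cls = objc_getClass("NSProcessInfo");
    if (!cls) return false;
    id info = ((id (*)(Class, SEL))objc_msgSend)(cls, sel_registerName("processInfo"));

    // Guard against older OS versions that lack the property.
    SEL sel = sel_registerName("isiOSAppOnMac");
    bool responds = ((bool (*)(id, SEL, SEL))objc_msgSend)(
        info, sel_registerName("respondsToSelector:"), sel);
    if (!responds) return false;

    // processInfo.isiOSAppOnMac
    return ((bool (*)(id, SEL))objc_msgSend)(info, sel);
}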
Consider the following program, memory-leak.c:
#include <stdlib.h>

void *p;

int main() {
    p = malloc(7);
    p = 0; // The memory is leaked here.
    return 0;
}
If I compile this with clang memory-leak.c and test the output with the built-in macOS memory leak detector leaks, using leaks -quiet -atExit -- ./a.out, I get (in part) the following output:
1 leak for 16 total leaked bytes.
However, if I remove the 'leaking' line like so:
#include <stdlib.h>

void *p;

int main() {
    p = malloc(7);
    return 0;
}
Compiling this file and again running leaks now (in part) returns:
0 leaks for 0 total leaked bytes.
The man page for leaks shows that only unreachable memory is considered a leak. Is there a configuration to detect un-freed but reachable malloc segments?
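The only workaround I've come up with is to drop the last references in an atexit handler, so the allocations become unreachable by the time the check runs (a sketch; it assumes atexit handlers fire before the -atExit snapshot is taken, which I haven't verified):

#include <stdlib.h>

void *p;

// Clear the global so the allocation is unreachable at the leak check.
static void drop_refs(void) { p = NULL; }

int main() {
    atexit(drop_refs);
    p = malloc(7);
    return 0; // exit() runs drop_refs, making the block a reportable leak
}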
#include <stdio.h>

int main() {
    unsigned long a, d;
    __asm__ volatile (
        "\n\t"
        "movl $0x77777777, %%eax\n\t"
        "movl $0xffffffff, %%ecx\n\t"
        "xorl %%edx, %%edx\n\t"
        "divl %%ecx\n\t"
        "cwtd\n\t"
        "movq %%rax, %0\n\t"
        "movq %%rdx, %1\n\t"
        : "=r"(a), "=r"(d)
        :: "rax", "rcx", "rdx" // "rcx" added to the clobber list since %ecx is written
    );
    printf("rax: %lx, rdx: %lx\n", a, d);
}
The minimal program above was expected to print rax: 0, rdx: 77770000: divl leaves eax = 0 and edx = 0x77777777, and cwtd should sign-extend ax into dx only, replacing just the low 16 bits of edx. But on macOS under Rosetta 2 it prints rax: 0, rdx: 0, as if the whole register were cleared. This causes specific programs (e.g. Genshin Impact) to crash.