A CIImage is just a recipe for how to transform an image buffer. When you render an image into a CIContext, this recipe is applied. You can inspect the recipe by checking [CIImage description]. All filters and kernels are applied in a working color space (which should usually be Display P3, if you ask me). The pixels of the input image, however, are encoded in that image's own color space. When you render the output image, the input pixel values are first converted to the working color space, then all filters are applied, and finally the result is converted to the output color space. The output space is typically Display P3 as well if you render on screen (check the window's color space), sRGB if you render to a JPEG for the web, or Rec. 709 if you render into a video pixel buffer. Telling the CIContext all of these color spaces ensures the correct color transformations are used and the output image is exactly what you want.
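A minimal sketch of that pipeline (the file path and filter choice are just placeholders for illustration):

```swift
import CoreImage

// Load an input image; Core Image records its embedded color space.
let url = URL(fileURLWithPath: "photo.heic") // hypothetical path
guard let input = CIImage(contentsOf: url) else { fatalError("no image") }

// Inspect the "recipe": prints the filter graph and the input color space.
print(input.description)

// Context whose working color space is Display P3 — every filter kernel
// runs on values converted into this space first.
let context = CIContext(options: [
    .workingColorSpace: CGColorSpace(name: CGColorSpace.displayP3)!
])

// Apply a filter; it operates in the working color space.
let filtered = input.applyingFilter("CIPhotoEffectMono")

// Render to a JPEG for the web: on output, pixels are converted from
// the working space to sRGB.
let srgb = CGColorSpace(name: CGColorSpace.sRGB)!
let jpegData = context.jpegRepresentation(of: filtered, colorSpace: srgb)
```

So input space, working space, and output space are three independent settings, and the context performs the conversions between them at render time.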
If you remove the color space from the input images, I assume the CIContext falls back to a wrong input color space — the pixel values are then treated as if they were already in the working space, so no input conversion happens. Is the image darker or lighter than before?
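For reference, this is (I believe) what "removing" the input color space looks like — passing NSNull() for the color space option opts that image out of color management:

```swift
import CoreImage

let url = URL(fileURLWithPath: "photo.heic") // hypothetical path

// With .colorSpace set to NSNull(), Core Image does NOT convert the
// pixel values into the working color space; they are used as-is.
// If the file was encoded in sRGB but the working space is something
// else, this typically shows up as a brightness/gamma shift.
let unmanaged = CIImage(contentsOf: url, options: [.colorSpace: NSNull()])
```

Comparing the rendered result with and without this option should tell you which direction the shift goes.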