- Thu Jan 07, 2021 2:35 am
#400363
Hi all,
I'm currently working on a project testing the feasibility of using Maxwell to simulate scenery in the near and short-wave infrared. The approach I'm taking is to build a scene physically (with boxes, various materials, etc.), capture photos of it with infrared cameras, then recreate the scene in Maxwell and compare the renders against those photos. Specifically, I'm using Blender to create the models and Maxwell Studio 5.1 for the rest of the process.
I read in a manual that "Maxwell considers a spectrum which ranges from the Infrared to the Ultraviolet", and that "each pixel in the output image contains different amounts of spectral energy", which is then interpreted by the sensor and processed into an image. After spending a few weeks familiarising myself with the software, I seem to be at an impasse: is it possible to access this spectral data somehow? Or could I configure a custom camera to process only the spectral energy within specific wavelengths outside the visible spectrum? I need to find a way to make Maxwell's output scientifically comparable to the data collected from the infrared cameras (a grayscale image whose pixel values correspond to the intensity of light within a specific band, i.e. approx. 1–1.7 µm for short-wave IR).
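To make it concrete: if the per-pixel spectral data could be exported somehow (say, as an array with one channel per wavelength bin — purely hypothetical, I don't know yet whether Maxwell exposes anything like this), the processing I have in mind would just be a band integration over the SWIR range, roughly like this:

```python
import numpy as np

def band_intensity(cube, wavelengths_um, lo=1.0, hi=1.7):
    """Collapse a per-pixel spectral cube of shape (H, W, N) into a
    grayscale image by integrating spectral energy over [lo, hi] um,
    mimicking a broadband SWIR sensor with a flat spectral response."""
    wavelengths_um = np.asarray(wavelengths_um)
    mask = (wavelengths_um >= lo) & (wavelengths_um <= hi)
    w = wavelengths_um[mask]
    vals = cube[..., mask]
    # Trapezoidal integration over the in-band wavelength samples.
    return np.sum((vals[..., 1:] + vals[..., :-1]) * np.diff(w) / 2.0,
                  axis=-1)

# Tiny synthetic example: a 2x2 image with 8 wavelength bins
# spanning 0.4-2.0 um, flat unit spectral energy everywhere.
wl = np.linspace(0.4, 2.0, 8)
cube = np.ones((2, 2, 8))
gray = band_intensity(cube, wl)  # uniform grayscale image
```

The real camera's spectral response obviously isn't flat, so a weighting curve would need to be folded in, but this is the basic shape of the comparison I'm after.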
Does anyone have any further insights / ideas for how I could continue?
Feel free to ask any questions if I wasn't clear enough and thank you for your time!
Last edited by Scott Lee on Fri Jan 08, 2021 7:49 am, edited 1 time in total.