# Megapixels

A GTK4 camera application that knows how to deal with the media request API.

It uses OpenGL to debayer the raw sensor data for the preview.

Chat: [#megapixels:brixit.nl](https://matrix.to/#/#megapixels:brixit.nl) on Matrix

## Building

Before building this, build and install [libmegapixels](https://gitlab.com/megapixels-org/libmegapixels), [libdng](https://gitlab.com/megapixels-org/libdng) and [postprocessd](https://gitlab.com/megapixels-org/postprocessd).

```shell-session
$ meson build
$ cd build
$ ninja
$ sudo ninja install
$ sudo glib-compile-schemas /usr/local/share/glib-2.0/schemas
```

# Post processing

Megapixels only captures raw frames and stores .dng files. It captures a 5 frame burst and saves it to a temporary location. Then the post-processing script is run, which generates the final .jpg file and writes it into the pictures directory.

Megapixels looks for the post-processing script in the following locations:

* `./postprocess.sh`
* `$XDG_CONFIG_DIR/megapixels/postprocess.sh`
* `~/.config/megapixels/postprocess.sh`
* `/etc/megapixels/postprocess.sh`
* `/usr/share/megapixels/postprocess.sh`

The bundled `postprocess.sh` script copies the first frame of the burst into the pictures directory as a DNG file. If dcraw and ImageMagick are installed, it also generates a JPG and writes that to the pictures directory. It supports either the full `dcraw` or `dcraw_emu` from libraw.

It is possible to write your own post-processing pipeline by providing your own `postprocess.sh` script at one of the locations above. The first argument to the script is the directory containing the temporary burst files and the second argument is the final path for the image without an extension. For more details see `postprocess.sh` in this repository; a minimal sketch of a custom script is shown at the end of this README.

# Video recording

Video recording requires the GStreamer framework, dcraw and ImageMagick. You may need to run `sudo apt install dcraw imagemagick`.

# Developing

Megapixels is developed at: https://gitlab.com/megapixels-org/megapixels

## Source code organization

* `camera_config.c` describes how cameras are configured. Contains no state.
* `main.c` contains the entry point and UI portion of the application.
* `quickpreview.c` implements fast preview functionality, including debayering, color correction, rotation, etc.
* `io_pipeline.c` implements all IO interaction with V4L2 devices in a separate thread to prevent blocking.
* `process_pipeline.c` implements all processing done on captured images, including launching post-processing.
* `pipeline.c` is a generic threaded message-passing implementation based on glib, used to implement the pipelines.
* `camera.c` is a V4L2 abstraction layer to make working with cameras easier.
* `device.c` is a V4L2 abstraction layer for devices.

The primary image pipeline consists of the main application, the IO pipeline and the process pipeline. The main application sends commands to the IO pipeline, which in turn talks to the process pipeline, which then talks to the main application. This way neither IO nor processing blocks the main application, and races are generally avoided.

Tests are located in `tests/`.
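
## Example custom post-processing script

As a rough illustration of the interface described in the post-processing section, the sketch below shows a minimal custom `postprocess.sh`. It assumes only what is documented above: the first argument is the burst directory, the second is the final output path without an extension. The variable names and the dcraw/ImageMagick fallback are illustrative; see the bundled `postprocess.sh` for the full version.

```sh
#!/bin/sh
# Minimal sketch of a custom postprocess.sh (illustrative, not the bundled script).
# $1: directory containing the temporary burst of .dng frames
# $2: final output path without an extension
BURST_DIR="$1"
TARGET_NAME="$2"

# Keep the first raw frame of the burst as the final DNG.
FIRST_FRAME=$(find "$BURST_DIR" -name '*.dng' | sort | head -n 1)
cp "$FIRST_FRAME" "$TARGET_NAME.dng"

# Optionally develop a JPEG if dcraw and ImageMagick are available.
if command -v dcraw > /dev/null && command -v convert > /dev/null; then
    dcraw -c -w "$FIRST_FRAME" | convert - "$TARGET_NAME.jpg"
fi
```

Install such a script at one of the locations listed above (for example `~/.config/megapixels/postprocess.sh`) and Megapixels will use it instead of the bundled one.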