Thursday, August 17, 2017

TrinamiX 3D Sensor Paper

BASF spinoff trinamiX publishes a paper "Focus-Induced Photoresponse: a novel optoelectronic distance measurement technique" by Oili Pekkola, Christoph Lungenschmied, Peter Fejes, Anke Handreck, Wilfried Hermes, Stephan Irle, Christian Lennartz, Christian Schildknecht, Peter Schillen, Patrick Schindler, Robert Send, Sebastian Valouch, Erwin Thiel, and Ingmar Bruder.

"Here we introduce Focus-Induced Photoresponse (FIP), a novel method to measure distances. In a FIP-based system, distance is determined by using the analog photoresponse of a single pixel sensor. This means that the advantages of high-density pixelation and high-speed response are not necessary or even relevant for the FIP technique. High resolution can be achieved without the limitations of pixel size, and detectors selected for a FIP system can be orders of magnitude slower than those required by ToF based ones. A system based on FIP does not require advanced sensor manufacturing processes to function, making adoption of unusual sensors more economically feasible.

In the FIP technique, a light source is imaged onto the photodetector by a lens. The size of its image depends on the position of the detector with respect to the focused image plane. FIP exploits the nonlinearly irradiance-dependent photoresponse of semiconductor devices. This means that the signal of a photodetector not only depends on the incident radiant power, but also on its density on the sensor area, the irradiance. This phenomenon will cause the output of the detector to change when the same amount of light is focused or defocused on it. This is what we call the FIP effect.
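The principle lends itself to a toy numerical model. The sketch below is not from the paper — the lens parameters, the power-law response exponent, and the minimum spot size are all invented for illustration — but it shows why a nonlinearly irradiance-dependent detector produces different outputs for the same radiant power at different amounts of defocus:

```python
import math

def spot_radius(z_obj, f=0.010, r_lens=0.005, z_sensor=0.0101):
    """Geometric blur-spot radius for a point source at distance z_obj (m),
    imaged through a thin lens (focal length f) onto a sensor fixed at z_sensor."""
    z_img = 1.0 / (1.0 / f - 1.0 / z_obj)       # thin-lens equation
    return r_lens * abs(z_sensor - z_img) / z_img

def fip_signal(power, z_obj, gamma=0.7, r_min=1e-5):
    """Detector output for a nonlinear (gamma != 1) irradiance response.
    With gamma == 1 the output equals the incident power and carries no
    distance information; with gamma != 1, defocus changes the output."""
    r = max(spot_radius(z_obj), r_min)          # clip at a minimum spot size
    area = math.pi * r ** 2
    irradiance = power / area
    return area * irradiance ** gamma           # integrate E^gamma over the spot

# Same radiant power, two object distances -> two different outputs:
s_near = fip_signal(1e-3, 0.5)
s_far = fip_signal(1e-3, 2.0)
```

In the real system the paper describes, readings from detectors at different defocus positions are combined so the distance estimate becomes independent of the source power; the toy above only demonstrates the underlying FIP effect.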

Qualcomm on VR Motion Tracking Setup

Qualcomm presentation "On-device motion tracking for immersive mobile VR" discusses 4-camera setup in the company's VR headset:

First Report on CIS Reproducibility, Variability and Reliability

Albert Theuwissen releases the first report on "Reproducibility, Variability and Reliability of CIS" in a 5-year series. The 175-page report contains 118 figures and 98 tables with data on QE, FPN, DSNU, Qsat, DR, SNR, DC, and more.

Wednesday, August 16, 2017

Coolpad CEO Quantifies Dual Camera Advantages

Coolpad, one of the large China-based smartphone makers that uses 13MP monochrome + 13MP RGB dual camera in its devices, says about the benefits of that configuration:

"The two lenses may look the same, but they have very different functions. One shoots in RGB to produce a color image, while the other takes care of the monochrome images. The monochrome lens brings out the detail, and engage more light than the RGB lens when in low-light condition, which takes care of the colors. The Dual camera 2.0 technology in Cool Dual actually enhanced the overall clarity of the image by 20%, help reduce image noise by 8% and improved brightness by 20%. “With these, we believe the real dual 13MP cameras brings us smart framing and the 6P lens gives customers the best quality of pictures”, said Jeff Liu, Coolpad Group CEO."

Digitimes on Automotive LiDAR Adoption

Digitimes Research comes up with its analysis of LiDAR adoption in the car industry, forecasting the first LiDAR-equipped production cars to appear this year:

"...Audi will take the initiative to launch car models equipped with LiDAR sensors in 2017 and Mercedes Benz, Cadillac, Ford and Volvo are expected to follow suit in 2018-2019...

...only one million LiDAR sensors worth US$200 million will be shipped globally in 2019. However, global shipment value for LiDAR sensors will fast grow to US$500 million in 2024 along with decreasing cost and increasing adoption.

For Level 2 self-driving... LiDAR sensors are required to detect objects as far as 100-150 meters ahead. For Level 3, the required range is 200-300 meters

...Velodyne LiDAR and Quanergy Systems, and Germany-based Ibeo Automotive Systems are the main vendors globally of LiDAR sensors, with the former two focusing on solid-state models to reduce LiDAR sensor sizes.

Qualcomm Unveils Structured Light Depth-Sensing Platform

Qualcomm announces an expansion of the Qualcomm Spectra Module Program to incorporate biometric authentication and high-resolution depth sensing for a broad range of mobile devices and head-mounted displays (HMDs). This module program is built on the 2nd-generation Spectra embedded ISP family.

Now, the camera module program is being expanded to include new camera modules capable of utilizing active sensing for biometric authentication, and structured light for a variety of computer vision applications that require real-time, dense depth map generation and segmentation.

The low-power, high-performance motion tracking capabilities of the Qualcomm Spectra ISP, in addition to optimized simultaneous localization and mapping (SLAM) algorithms, are designed to support new extended reality (XR) use cases for VR and AR applications that require SLAM.

It also features multi-frame noise reduction for superior photographic quality, along with hardware-accelerated motion compensated temporal filtering (MCTF), and inline electronic image stabilization (EIS) for superior camcorder-like video quality.

The Spectra family of ISPs and new Spectra camera modules are expected to be part of the next flagship Snapdragon Mobile Platform.

Qualcomm Emerging Vision Technologies presentation gives some use cases for its 3D depth sensing module.

Qualcomm also publishes a YouTube demo of its structured light depth sensing module:

Tuesday, August 15, 2017

Pinnacle Introduces HDR ISP Core

Pinnacle Imaging Systems launches Denali-MC HDR ISP IP core said to preserve a scene’s color fidelity and full contrast range throughout the tone mapping process, all without producing halos, color shifts, and undesired motion artifacts.

Denali-MC provides a 16-bit data path capable of producing 100 dB or 16-EV steps of dynamic range. Denali-MC HDR IP completely eliminates halo artifacts and color shifts, and mitigates the ghost artifacts and transition noise often seen when merging multiple exposures. This allows Denali-MC to capture up to four exposure frames from 1080p video at 120 fps, while merging and tone mapping at 30 fps in real time. For applications requiring faster output frame rates, Denali-MC also supports a two frame merge mode exporting at 60 fps. Furthermore, Denali-MC can support up to 29 different CMOS sensors, including 9 Aptina/ON Semi, 6 Omnivision and 11 Sony sensors, and 12 different pixel-level gain and frame-set HDR methods, and is said to be easily ported to the most widely-used logic platforms.

HDR-Specific Features:
  • Advanced motion compensation algorithms virtually eliminate HDR merge artifacts and transition noise
  • Proprietary Locally Adaptive Tone Mapping technology preserves color fidelity through the entire tonal range without creating halo artifacts or color shifts
  • Automatic EV bracketing
  • Automatic or manual contrast adaptation for global or local video correction
  • React concurrent still frame and video capture feature, non-destructively extracts four source LDR Bayer images, merged Bayer HDR, tone mapped Bayer or HDMI RGB still frames without interrupting video
  • Ability to capture separate HDR and tone mapped output video streams concurrently (ideal for ADAS applications)
  • Two or four frame multiple exposure merge (with Sony IMX290 implementation)
  • HDR + Low illumination capabilities with Sony IMX290 sensor enable 24/7 round the clock video capture capabilities for any contrast and lighting condition
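For reference, the quoted "100 dB or 16-EV" figures are two ways of stating the same ratio (dB uses 20·log10, EV uses log2), and a bracketed-exposure merge can be sketched in a few lines. The merge below is a generic radiance-estimation toy, not Pinnacle's algorithm; the saturation threshold and sample values are made up:

```python
import math

def db_to_ev(db):
    """Convert a dynamic range in dB to EV (stops): same ratio, different log base."""
    ratio = 10 ** (db / 20.0)          # dB uses 20*log10 for signal ratios
    return math.log2(ratio)

def merge_hdr(frames, exposures, sat=0.95):
    """Naive per-pixel radiance merge of bracketed LDR frames.
    frames: per-frame lists of normalized pixel values [0..1];
    exposures: relative exposure times, parallel to frames."""
    merged = []
    for samples in zip(*frames):
        # average exposure-normalized values from unsaturated samples
        est = [v / t for v, t in zip(samples, exposures) if v < sat]
        if not est:                    # all saturated: fall back to shortest exposure
            t, v = min(zip(exposures, samples))
            est = [v / t]
        merged.append(sum(est) / len(est))
    return merged

print(round(db_to_ev(100), 1))  # -> 16.6 stops for 100 dB
```

So "100 dB" is closer to 16.6 EV than an even 16; the vendor figure is evidently rounded.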

“At Fairchild Imaging, we have been very impressed with the Denali-MC ISP's state-of-the-art locally adaptive tone mapping (LATM) functionality,” said Vern Klein, Director of Sales and Marketing at Fairchild Imaging. “I’ve seen first-hand how they can make a great sensor perform even better in its native WDR mode. Camera manufacturers will benefit from this technology, which provides high quality HDR functionality without requiring companion chips or additional hardware cost to support the algorithms.”

“Pinnacle’s new Denali-MC HDR ISP is a significant achievement addressing HDR video requirements in surveillance, monocular camera automotive markets and machine learning with its customization, artifact compensation, color accuracy and quantifiable high dynamic range of 100 dB,” said Paul Gallagher, image sensor industry veteran and futurist. “Camera system developers in these markets would benefit from utilizing these attributes of the Denali-MC ISP as a standalone ISP or integrating Pinnacle’s HDR IP blocks within their existing ISPs.”

Pinnacle also publishes a video demo of its HDR capabilities (short version, long version):

Technavio Low-Light Imaging Market Report

BusinessWire: Technavio comes up with another unusual report, "Global Low Light Level Imaging Sensors Market 2017-2021." A few quotes:

"In 2016, the night vision devices segment accounted for close to 55% of the total revenues due to high adoption of low light level imaging sensors by the defense sector. The increasing focus on reducing road accidents drives the demand for night vision systems. Night vision systems are offered as built-in systems by Audi, BMW, and Toyota. The market is expected to grow at a CAGR of close to 16% during the forecast period.

The global low light level imaging sensors market by cameras contributed 25% of the total revenues in 2016. These sensors are used in applications such as home security cameras, small business monitoring, and infrastructure security.

In 2016, the global low light level imaging sensors market by optic lights accounted for around 13% of the total revenue in 2016. These are widely used in lighting, decorations, and mechanical inspections of obscure things. Optic lights save space and provide superior lighting, and are therefore used in vehicles. Low light level imaging sensors are crucial components of optic lights as these sensors intensify the range of light.

Saturday, August 12, 2017

SMIC Reports Rising CIS Sales

SeekingAlpha publishes the SMIC Q2 2017 earnings call transcript saying that the foundry sees significant growth in its CIS process volume:

"By device, most of our year-on-year revenue growth came from CMOS image sensors, NOR Flash, application processors and the power ICs.

And for the existing customer, for... the CMOS imager applications, we also see the recovery, yes.

Friday, August 11, 2017

Sony Kumamoto Earthquake Video

Imaging Resource publishes a video on the Sony Kumamoto fab earthquake damage and the recovery effort that brought the fab back to life in such a short time:

LG Announces Smartphone Camera with F1.6 Lens

BusinessWire: The upcoming LG V30 smartphone is said to have a camera module with an f/1.6 lens, said to be the brightest in smartphones. I wonder what the effective aperture of the pixels in the sensor is and whether it can make use of such a bright lens.
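For scale, the light-gathering advantage of f/1.6 follows from sensor illuminance scaling as 1/N². A quick check against the f/1.8 and f/2.0 apertures common in 2017 flagships (the comparison apertures are my choice, not from the report):

```python
def light_gain(n_new, n_old):
    """Relative light gathered by a lens with f-number n_new vs n_old;
    illuminance on the sensor scales as 1/N^2."""
    return (n_old / n_new) ** 2

print(round(light_gain(1.6, 1.8), 2))  # vs f/1.8 -> 1.27x
print(round(light_gain(1.6, 2.0), 2))  # vs f/2.0 -> 1.56x
```

About a quarter to half a stop, which is only realized if the pixels' acceptance angle (microlens chief-ray tolerance) actually admits the wider cone of light — hence the question above.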

Thursday, August 10, 2017

Sony introduces 2.9um Pixel 1080p60 HDR Sensors

Sony introduces the IMX307 and IMX327 sensors featuring 2.9um pixels. The 1080p60 sensors support multiple-exposure DOL-HDR and have a 12b ADC.

Wednesday, August 09, 2017

ESPROS CEO Interview

Autosens publishes an interview with ESPROS CEO Beat De Coi. A few quotes:

Q: Why did you decide to buck the trend in semiconductors to have your own foundry?

A: Simply there was or is no silicon CMOS technology available which offered high Quantum Efficiency in NIR in combination with high performance CCD. Our backside illumination technology OHC15L offers what is needed for powerful LiDAR, TOF and in general ultrafast gated imagers.

Q: Your session on time-of-flight sensors is about next generation technology – what’s different about it from existing ToF?

A: - Quantum Efficiency is greater than 75% @905nm
- active ambient light suppression
- single pulse operation vs. continuous light modulation in previous generation systems

Q: What is the timescale of this technology coming to the market?

A: Early 2018

Tuesday, August 08, 2017

Stacking Technologies Overview

Phil Garrou's IFTLE 346 overviews image sensor stacking history since Sony introduced it in 2013:

Grenoble-Lyon Imaging Valley Guide

EETimes publishes a useful article "Engineer's Guide to Imaging Valley" by Junko Yoshida. Most of France’s imaging companies are concentrated in the Grenoble/Lyon area:

The article gives a very nice overview of the history and contemporary state of the French Imaging Valley.

Thanks to JD for the link!

ON Semi Exits Mobile CIS Business, Licenses-Out Related Patents

SeekingAlpha Q2 2017 earnings call transcript gives some interesting info about ON Semi image sensor business:

"During the second quarter, we exited the mobile image sensor market as the margin profile for that business was not comparable with our target financial model.

Furthermore, we monetize the value of highly differentiated mobile imaging technology, through an intellectual property licensing agreement with a third-party. We have excluded the gain of approximately $24 million related to this transaction from our second quarter non-GAAP results.

Second quarter free cash flow and operating cash flow included approximately $24 million from a licensing arrangement related to the mobile image sensor business.

For the second quarter, we again posted strong growth in our CMOS image sensor business for viewing and ADAS applications. We continue to gain market share in automotive image sensors and our design win pipeline for our CMOS image sensors for automotive applications continues to grow at a rapid pace.

We continue to see strong growth in machine vision applications with our PYTHON line of CMOS image sensors. As I indicated earlier, we are engaging at the very early stage with key players in artificial intelligence for machine vision and robotics applications.

The company's spreadsheet shows the proportion of image sensor business over the years:

Sunday, August 06, 2017

UNC Chapel Hill Asynchronous Sensor Architecture

The University of North Carolina at Chapel Hill presents "A Frameless Imaging Sensor with Asynchronous Pixels: An Architectural Evaluation" by Montek Singh, Pintian Zhang, Andrew Vitkus, Ketan Mayer-Patel, and Leandra Vicci at the 2017 IEEE Intl Symposium on Asynchronous Circuits and Systems.

"The goal of this work is to develop a novel CMOS camera sensor that provides frameless capture, and has significantly higher dynamic range, finer color sensitivity, and lower noise as compared to the current state-of-the-art sensors. The strength of the approach lies not in developing new types of photodetectors or amplifiers, but in the manner in which information is extracted from the pixel sensor, transported to the processing logic, and processed to yield intensity values. At the heart of the sensor is an asynchronous network to transport events from the pixel sensors to the off-grid processing circuitry. The asynchronous nature of pixel communication is the key to achieving frameless image capture."

The UNC Chapel Hill research group is looking for industrial partners who might be interested in the IP and in partnering with the group for further development.

Saturday, August 05, 2017

Chronocam in Novus Light

NovusLight publishes an article "Vision Inspired by Biology" based on a talk with Luca Verre, the CEO and co-founder of Chronocam. A few quotes:

"Based on the new technology concept, the company recently released a QVGA resolution (320 by 240 pixels) sensor with a pixel size of 30-microns on a side and quoted power efficiency of less than 10mW.

Because the sensor is able to detect the dynamics of a scene at the temporal resolution of few microseconds (approximately 10 usec) depending on the lighting conditions, the device can achieve the equivalent of 100,000 frames/sec.

In Chronocam’s vision sensor, the incident light intensity is not encoded in amounts of charge, voltage, or current but in the timing of pulses or pulse edges. This scheme allows each pixel to autonomously choose its own integration time. By shifting performance constraints from the voltage domain into the time domain, the dynamic range is no longer limited by the power supply rails.

Therefore, the maximum integration time is limited by the dark current (typically seconds) and the shortest integration time by the maximum achievable photocurrent and the sense node capacitance of the device (typically microseconds). Hence, a dynamic range of 120dB can be achieved with the Chronocam technology.

Due to the fact that the imager also reduces the redundancy in the video data transmitted, it also performs the equivalent of a 100x video compression on the image data on chip.
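The quoted 120 dB figure follows directly from the integration-time limits mentioned above: when intensity is encoded as time-to-threshold, the dynamic range is set by the ratio of the longest (dark-current-limited, ~seconds) to the shortest (photocurrent-limited, ~microseconds) integration time. A minimal sketch — the threshold charge is an arbitrary illustrative value, not a Chronocam specification:

```python
import math

def time_to_threshold(photocurrent, q_thresh=1e-14):
    """Time-domain encoding: each pixel integrates until a fixed charge
    threshold is reached, so brighter pixels fire sooner (t = Q / I)."""
    return q_thresh / photocurrent

def dynamic_range_db(t_max, t_min):
    """Dynamic range set by the longest and shortest usable integration times."""
    return 20 * math.log10(t_max / t_min)

# ~1 s dark-current limit vs ~1 us photocurrent limit, per the article:
print(round(dynamic_range_db(1.0, 1e-6)))  # -> 120 dB
```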

The company announced it had raised $15 million in funding from Intel Capital, along with iBionext, Robert Bosch Venture Capital GmbH, 360 Capital, CEAi and Renault Group.

From the company presentation at Event-based Vision Workshop 2017:

Friday, August 04, 2017

Smartphone Cameras Go from Double to Triple

GSM Arena reports that the upcoming Vivo X20 (Xplay 7) smartphone features 5 cameras - a dual camera on the front and a triple camera on the rear:

IISW 2017 Papers On-Line

International Image Sensors Workshop 2017 held in Hiroshima, Japan on May 30-June 2, publishes all 110 presented papers on-line: 61 regular papers, 45 posters, and 4 invited papers. Some of the papers also have presentation slides published. It's a very good read over the weekend!

Thursday, August 03, 2017

Yole Uncooled Thermal Imaging Report

Yole Developpement releases "Uncooled Infrared Imagers market & technology trends" report: "Today the uncooled IR camera market is showing an 8% CAGR between 2016 and 2022 reaching almost US$ 4.4 billion at the end of the period. Only few players control this industry. In 2016, two leading companies, FLIR and ULIS, both with different market strategies and solutions, owned more than 75% of the total market (in volume).

Besides FLIR and ULIS, many other players are also benefiting from IR imaging market growth:

• SEEK Thermal has introduced its new, higher-performance RevealPRO and CompactPRO products as the company moves from consumer products to more high-end products.

• Players such as BAE Systems or Leonardo DRS are benefiting from the defense market growth cycle that could still last for a few more years.

• Newcomers are introducing their products, for example, Teledyne Dalsa released its first Vox microbolometers in 2017.

• Many companies in China are developing their own microbolometers. They do not produce large volumes today but the domestic market has great potential.

• On the other hand companies like Bosch, long involved in the MEMS and infrared businesses, have changed their strategies.

"2016 was a good year for the microbolometer market. There were almost 900,000 uncooled IR camera shipments, worth $2.7B in revenues thanks to a dynamic commercial market and continued growth for military applications. Many commercial applications drove this growth, including thermography, surveillance, PVS and firefighting. In 2022, we estimate there will be 1.7M units shipped.

Thermography is still the leading commercial market by far, in both value and volume. We estimate that there will be 500,000 thermography units shipped annually by 2022. As camera prices continue to fall, with several new products below $1000, sales are growing.

Surveillance is another interesting market. Until recently, thermal cameras have primarily been used in high-end surveillance for critical and government infrastructure. New municipal and commercial applications with lower price points are now arising, including traffic, parking, power stations and photovoltaic planning. We estimate this market will grow at almost 17% over 2017-2022 to reach 300,000 units by 2022.

Night vision in cars, including autonomous vehicles, could boost the microbolometer market. China is already a large market for automotive night vision, absorbing 25% of the total number of systems produced. In coming years, China will continue to account for a high share of this market.
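As a back-of-the-envelope check (mine, not Yole's model), the quoted figures are roughly self-consistent: $2.7B in 2016 compounding at 8% for six years lands at about $4.3B, close to the stated "almost US$4.4 billion" for 2022:

```python
def cagr_project(value0, rate, years):
    """Project a value forward at a constant compound annual growth rate."""
    return value0 * (1 + rate) ** years

print(round(cagr_project(2.7, 0.08, 6), 2))  # 2016 -> 2022: 4.28 ($B)
```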

Himax Q3 2017 Update Focuses on 3D Imaging and NIR

Himax Q3 2017 report has an update on the company's imaging business:

The Company believes 3D sensing is among the most significant new features for the next generation smartphone. The Company’s SLiM product line, based on structured light technology, is a state of the art total solution for 3D sensing. Himax’s goal is to provide total solutions with performance, size, power consumption and costs all suitable for smartphones and tablets. Himax offers fully integrated structure light modules, with the vast majority of the key technologies inside the module also developed and supplied by the Company. These critical in-house technologies include advanced optics utilizing the Company’s world leading WLO technology, laser driver IC, high precision active alignment for the projector assembly, high performance near-infrared CMOS image sensor and, last but not least, an ASIC chip for 3D depth map generation. The fact that all of these critical building blocks are developed in-house puts the Company in a unique position. Himax is able to react quickly and tailor its solutions to customers’ specific needs.

It also represents a very high barrier of entry for any potential competition and a much higher ASP for the Company. While the Company prefers to offer a total solution, it can also provide the aforementioned individual technologies separately to select customers so as to best accommodate their specific needs.

Thanks to the Company’s absolute technology leadership, its progress made with the fully integrated structure light 3D sensing total solution module is very exciting. Himax is seeing strong demand for 3D sensing solutions from numerous tier 1 customers. The Company is in close collaboration with select leading smartphone makers and partners right now, aiming to bring its total solution to mass production as early as early 2018 to meet the customers' aggressive launch timetables. Moreover, given that the Company is offering highly integrated solutions with ASPs much higher than those of individual components, by the time the Company starts shipping its total solutions, they will be a major contributor to both Himax’s revenues and profit, and consequently create a more favorable product mix for the Company.

Himax continues to make great progress with its two machine vision sensor product lines, namely, near infrared (“NIR”) sensor and Always-on-Sensor (“AoS”). The Company’s NIR sensor is a critical part in the structured light 3D sensing total solution. The Company’s NIR sensors’ overall performance is far ahead of those of its peers in 3D sensing applications. Himax currently offers low noise HD, or 1 megapixel, and 5.5 megapixel NIR sensors and is planning to add more to further enrich its product portfolio. Himax’s NIR sensors deliver superior quantum efficiency in the NIR range, especially over 940nm band which is critical for outdoor applications.

The Company’s AoS solutions provide super low power computer vision, which enables new applications across a wide variety of industries. The ultra-low power, always-on vision sensor is a powerful solution capable of detecting, tracking and recognizing its environment in an extremely efficient manner using just a few milliwatts of power. The Company is pleased to report that it already has one major global brand leveraging its AoS in their new high end TV models, which have already hit the market.

For the traditional human vision segments, Himax sees strong demand in notebooks and increased shipments for multimedia applications such as car recorders, surveillance, drones, home appliances, and consumer electronics, among others.

Sony on Automotive Image Sensing

AutoSens publishes an interview with Abhay Rai, Director of Product Marketing for Automotive Imaging at Sony. A few quotes:

Q: What are sensor problems for autonomous driving?

A: Due to the emergence of safety critical application, basic performance requirements have changed. Sensors need to reliably work for long time. Reliable operating temperature range, functional safety, security, power, and heat are very important performance parameters that a safety critical system design need to address.

Challenge for an image sensor for autonomous driving is: Can a sensor still see well under all real world conditions (reliable working and always working).

Q: Why can’t we just use dashcam footage on YouTube for finding problems?

A: Dashcam footage can be a datapoint but autonomous driving system requires more real world data and real world simulation to get to the best, robust and the safest autonomous driving car.

NHK Open House 2017

NHK Open House held in Tokyo in May 2017 revealed the latest TV innovations:

3D-integrated image sensor with per-pixel interconnect (together with University of Tokyo): "We have managed to shrink the pixels from the previous size of approximately 80 × 80 µm² to approximately 50 × 50 µm²."

Fast 8K 240fps image sensor: "We developed a 33 megapixel image sensor capable of high-speed operation and constructed a prototype 8K high-speed camera supporting shooting at 240 fps, which is four times the frame rate in 8K test satellite broadcasting. This camera enables the shooting of fast-paced action such as that in sports in 8K ultrahigh-definition video."

And many other innovations including an organic image sensor "with a charge multiplication photoelectric conversion film in order to achieve highly sensitive 8K cameras" and "organic image sensors with three organic films that provide sensitivity for each of the primary colors."

Wednesday, August 02, 2017

Pixart Q2 2017 Results

Pixart Q2 2017 revenue increased by 17.4% QoQ to NT$1,295.3 million. The gross margin increased from 52.8% in the previous quarter to 55.8% in Q2 2017 due to a decrease in expenses and a product mix change.

Yole on Apple Activity in Grenoble Area

EETimes article on fabs mentions Apple imaging group activity: "Or, take the example of the imaging-tech landscape growing rapidly in the Grenoble/Lyon area. Pierre Cambou, activity leader for imaging and sensors at Yole Développement, explained that imaging technology innovation often demands advancements in new manufacturing techniques. In return, it creates a tech-driven environment.

Reportedly, more than a dozen Apple engineers are moving to Grenoble to open an R&D center. This is happening precisely because the region has the expertise in image sensors and production — led by ST. “You need to have factories” to make an ecosystem, said Cambou.

Technavio Divides ToF Market to Regions

BusinessWire: Technavio market reports often feature an interesting mix of quite obvious and quite unusual statements.

Sunil Kumar Singh, a lead analyst at Technavio, reports: "The growing popularity of augmented reality and virtual reality devices, 3D scanners, and gesture recognition technologies and the high investment in driverless cars by automobile manufacturers such as Ford, Nissan, and Tesla are expected to drive the global ToF market.

The demand for camera-enabled phones has been on the rise in South America and will drive the market for ToF sensors in this region. The replacement of CCD sensors with ToF sensors in many applications will also have a major impact on the ToF sensors market. The US, followed by Canada and Brazil, is the leading revenue generating country in the region owing to the early adoption of the technology

Global time of flight sensor market is expected to grow at a CAGR of 3% from 2017-2021. The consumer electronics segment accounted for close to 52% of the ToF sensor market share in 2016.

Tuesday, August 01, 2017

Digitimes Research: Sony Market Share is 45%

Digitimes Research believes that Sony took a 45% share of the global CIS market in 2016, while Samsung grabbed a 15% share of the market. The global CIS shipment value will grow to $11.2b in 2017 from $10.4b in 2016. The market is forecast to grow to nearly $13.8b in 2020.

Sony Image Sensor Sales Up but Forecast Down

Sony announces its quarterly earning results. The company updates about its image sensor business:

"Sales increased 41.4% year-on-year (a 38% increase on a constant currency basis) to 204.3 billion yen. This increase was primarily due to a significant increase in unit sales of image sensors for mobile products, as well as the absence of the impact of a decrease in image sensor production due to the 2016 Kumamoto Earthquakes in the same quarter of the previous fiscal year, partially offset by a significant decrease in sales of camera modules, a business which was downsized."

The forecast for 2017 fiscal year ending in March 2018 has been updated too:

"Sales are expected to be lower than the April forecast primarily due to lower-than-expected image sensor unit sales for mobile products, partially offset by the impact of foreign exchange rates. Operating income is expected to be higher than the April forecast mainly due to lower-than-expected production costs as well as the positive impact of foreign exchange rates, partially offset by the impact of the above-mentioned decrease in sales."

Monday, July 31, 2017

EDoF Rebirth

Extended Depth of Focus (EDoF) techniques were a popular topic 10-15 years ago, while mainstream camera phone resolution had not yet exceeded 2MP. However, EDoF companies were unable to scale their resolution beyond that point.

Tampere University of Technology, Finland, and FLIR seem to have found a good application for EDoF in MWIR cameras, where resolutions remain low to this day. Their paper "A novel two- and multi-level binary phase mask design for enhanced depth-of-focus" by Vladimir Katkovnik, Nicholas Hogasten, and Karen Egiazarian proposes a novel algorithm and its implementation for MWIR cameras:

"A midwave infrared (MWIR) system is simulated showing that this design will produce high quality images even for large amounts of defocus. It is furthermore shown that this technique can be used to design a flat, single optical element, systems where the phase mask performs both the function of focusing and phase modulation."

University of Linz Lensless Camera

University of Linz, Austria, publishes a paper "Thin-film camera using luminescent concentrators and an optical Söller collimator" by Alexander Koppelhuber and Oliver Bimber.

"We discuss optical imaging capabilities and limitations, and present first prototypes and results. Modern 3D laser lithography and deep X-ray lithography support the manufacturing of extremely fine collimator structures that pave the way for flexible and scalable thin-film cameras that are far thinner than 1 mm (including optical imaging and color sensor layers)."

Thanks to TL for the link!

Low Quality LiDARs Restrain Self-Driving Car Progress

MIT Technology Review: Cheaper LiDARs may not deliver the quality of data required for driving at highway speeds:

"At 70 miles per hour, spotting an object at, say, 60 meters out provides two seconds to react. But when traveling at that speed, it can take 100 meters to slow to a stop. A useful range of somewhere closer to 200 meters is a better target to shoot for to make autonomous cars truly safe.

That’s where cost comes in. Even an $8,000 sensor would be a huge problem for any automaker looking to build a self-driving car that a normal person could afford.

Graeme Smith, chief executive of the Oxford University autonomous driving spinoff Oxbotica, told MIT Technology Review that he thinks a trade-off between data quality and affordability in the lidar sector might affect the rate at which high-speed autonomous vehicles take to the roads. Smith thinks that automakers might just have to wait it out for a cheap sensor that offers the resolution required for high-speed driving. “It will be like camera sensors,” he says. “When we first had camera phones, they were kind of basic cameras. And then we got to a certain point where nobody really cared anymore because there was a finite limit to the human eye.
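The article's numbers check out with simple kinematics: 60 m of range at 70 mph leaves roughly two seconds, and stopping from that speed at a comfortable ~0.5 g takes on the order of 100 m. A quick check (the deceleration figure is my assumption, not from the article):

```python
MPH_TO_MS = 0.44704  # exact mph -> m/s conversion

def reaction_time(range_m, speed_mph):
    """Seconds available before reaching an object first detected at range_m."""
    return range_m / (speed_mph * MPH_TO_MS)

def braking_distance(speed_mph, decel=4.9):
    """Distance to stop at constant deceleration (default ~0.5 g): v^2 / (2a)."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * decel)

print(round(reaction_time(60, 70), 1))  # -> 1.9 s
print(round(braking_distance(70)))      # -> 100 m
```

Which is why the article argues for something closer to 200 m of usable range: it leaves margin for detection latency and braking on top of the raw stopping distance.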

Sunday, July 30, 2017

Image Sensor Companies Genealogy

I've prepared a genealogy of image sensor companies, with the kind help of EF and DG. As one can understand, nobody's knowledge is complete, so please feel free to add more info and correct mistakes in the comments. The link is also available in the left-hand side links, next to the image sensor companies list.

Friday, July 28, 2017

Essential Dual Camera Tuning

Essential startup explains what it takes to tune an image processing pipeline for a smartphone dual camera (RGB + Monochrome):

"Objective tuning is meant to ensure that each camera module sent to production is operating at an acceptable baseline level. It began with picking the correct golden and limit samples from the factory.

The golden samples are the modules whose characteristics most closely align to the average of our camera and the experience that most of our users will have. Once golden samples were collected, we used them to capture a series of images under various laboratory-controlled test conditions. The images from the golden samples were then used to train the ISP to recognize the unique characteristics of those modules. In other words, we taught the ISP to see the world in a certain way. We also tested other limit and random samples, which have different characteristics that are saved in the factory calibration data, to ensure that they are behaving like the golden samples in those scenes too. The objective tuning process lasted three months. By the end, all of our cameras were responding to the predefined lab scenes in an accurate and predictable fashion.

But even when a camera can repeat actions in a lab, it still needs to be taken into the field— because in real life a camera must be able to take the right picture in millions of different scenarios. Subjective tuning is what makes this possible. It is a painstaking, iterative process—but also one I find incredibly rewarding.

Our subjective tuning process began in January 2017, and during that time, we have gone through 15 major tuning iterations, along with countless smaller tuning patches and bug fixes. We have captured and reviewed more than 20,000 pictures and videos, and are adding more of them to our database every day.

Via: DPReview

AI News: Machine Learning for Stereo Depth Mapping, DNN Processor for Event Driven Sensors

San Francisco-based stealth startup PerceptiveIO publishes an open-access paper "UltraStereo: Efficient Learning-based Matching for Active Stereo Systems" by Sean Ryan Fanello, Julien Valentin, Christoph Rhemann, Adarsh Kowdle, Vladimir Tankovich, Philip Davidson, and Shahram Izadi.

"Mainstream techniques usually take a matching window around a given pixel in the left (or right) image and given epipolar constraints find the most appropriate matching patch in the other image. This requires a great deal of computation to estimate depth for every pixel.

In this paper, we solve this fundamental problem of stereo matching under active illumination using a new learning-based algorithmic framework called UltraStereo. Our core contribution is an unsupervised machine learning algorithm which makes the expensive matching cost computation amenable to O(1) complexity. We show how we can learn a compact and efficient representation that can generalize to different sensors and which does not suffer from interferences when multiple active illuminators are present in the scene. Finally, we show how to cast the proposed algorithm in a PatchMatch Stereo-like framework for propagating matches efficiently across pixels.
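
The O(1) matching idea can be illustrated with a toy sketch (my illustration, not the authors' code): map each patch once to a short binary code, so that every matching-cost evaluation along the epipolar line is a fixed-size Hamming distance instead of a full patch correlation. UltraStereo learns this mapping; here a random projection stands in for it:

```python
import numpy as np

# Illustrative sketch: binary patch codes make each cost evaluation
# independent of patch area. The random projection below is a stand-in
# for the learned mapping described in the paper.
rng = np.random.default_rng(0)
PATCH, BITS = 11, 32
proj = rng.standard_normal((BITS, PATCH * PATCH))  # stand-in for the learned mapping

def binarize(patch):
    """Map a flattened PATCH x PATCH patch to a BITS-bit binary code."""
    return (proj @ patch.ravel() > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# Matching a left patch against candidates on the epipolar line now costs
# BITS bit comparisons per disparity, regardless of patch size.
left = rng.standard_normal((PATCH, PATCH))
candidates = [left + 0.01 * rng.standard_normal((PATCH, PATCH)),  # near-identical
              rng.standard_normal((PATCH, PATCH))]                # unrelated
codes = [binarize(c) for c in candidates]
best = min(range(len(codes)), key=lambda i: hamming(binarize(left), codes[i]))
print(best)
```

In the real system the binary mapping is learned so that Hamming distance approximates the true matching cost under active illumination; the random projection here only demonstrates the constant-time structure.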

ETH Zurich publishes PhD Thesis "Deep Neural Networks and Hardware Systems for Event-driven Data" by Daniel Neil.

"This thesis introduces hardware implementations and algorithms that use inspiration from deep learning and the advantages of event-based sensors to add intelligence to platforms to achieve a new generation of lower-power, faster-response, and more accurate systems."

Qualcomm announces Snapdragon Neural Processing Engine (NPE) SDK running on Kryo CPU, Adreno GPU or Hexagon image processing DSP. Facebook announced plans to integrate the Snapdragon NPE into the camera of the Facebook app to accelerate Caffe2-powered AR features. By utilizing the Snapdragon NPE, Facebook can achieve 5x better performance on the Adreno GPU, compared to a generic CPU implementation.

Deutsche Bank Analysts on Apple 3D Sensing Plans

Deutsche Bank analysis of active alignment (AA) systems supplier ASM Pacific Technology (ASMPT) has interesting info on 3D sensing adoption in future iPhones and dual cameras in future Samsung phones:

"We expect ASMPT’s AA machine sales to grow only 10% YoY in 2018 and stay flat YoY in 2019, after 56% YoY growth in 2017 (Figure 8). Most camera module makers should upgrade their AA machines in 2017. Notably, we believe Apple will not implement 3D sensing for 4.7” and 5.5” iPhones in 2018. This means Apple supply chain will not procure new AA machines for 3D sensing from ASMPT in 2018 (i.e., ASMPT is benefiting from Apple’s adoption of 3D sensing for 5.8” OLED iPhone in 2017).

We estimate camera module makers could upgrade their AA machines every three years due to rapid specs migration in dual cameras for smartphones. This is shorter than the normal duration of five to six years for a CIS (CMOS image sensor) machine. However, ASMPT’s AA business could still see a sales growth deceleration in 2018/19, even assuming a shorter duration of AA machines.

Thursday, July 27, 2017

Light Publishes L16 Full Resolution Images

Two weeks after Light L16 computational camera shipments started, there is still not a single user review anywhere on the web. However, LightRumors notices that Light Co. has released a few full-resolution images on its web site. The images are processed using Light’s proprietary software, Lumen, which is powered by Light's proprietary Polar Fusion engine. The engine computationally fuses the many images captured by the L16 into one high-quality image.

Light Co. also publishes a nice tutorial explaining the L16 camera operation and technology.

Samsung to Allocate Capex for CIS Foundry Business

Samsung Q2 2017 earnings report mentions a capex allocation for the CIS foundry business:

"The Foundry Business is... to allocate sizable capex for converting part of line 11 from DRAM to image sensor production in the second half [of 2017]."

The company also mentions a healthy state of the image sensor sales:

"The System LSI Business increased sales of mobile processors and image sensors.

System LSI Business earnings improved QoQ... Sales of image sensors also contributed to earnings.

Growing market adoption of dual camera solutions will also boost image sensor shipments.

Wednesday, July 26, 2017

ST Reports Strong FlightSense Sales

ST Micro reports Q2 2017 results. Regarding the imaging business, the company says "As anticipated, Imaging revenues in the second quarter decreased slightly on a sequential basis to $68 million, while we prepare for the ramp of new programs.

On a year-over-year basis, Imaging revenues increased 60% in the second quarter, and for the first half 2017 rose 83% to $140 million driven by ST’s innovative Time-of-Flight technology.

In the second quarter we continued to gain design-wins while delivering high volumes of our “FlightSense” Time-of-Flight proximity and ranging sensors to multiple smartphone OEMs. We now have reached cumulative shipments of over 300 million Time-of-Flight sensors and are in more than 80 smartphone models from 15 different OEMs.

In our Imaging business, we anticipate strong sequential growth, as the key new program ramps in Q3, followed by further revenue acceleration in the fourth quarter of this year.

EETimes speculates that the "key new program ramps in Q3" might mean ToF sensor in Apple iPhone 8.

SeekingAlpha publishes the earnings call transcript with a clarifying question in Q&A session:

Janardan Menon - Liberum Capital Ltd.

And just a brief follow-up on the Time-of-Flight, which is in your other division. After a big jump in the second half of last year, that revenue has sort of flattened out. But you are continuously reporting higher number of models and OEM on that particular product. And now I understand that from the second half, that revenue will increase sharply because of the 3D of the special program.

But just on the Time-of-Flight itself, can you give some reason why that revenue is not really rising as a number of model. Is that price pressure coming there? Or what are the dynamics which is happening there?

Carlo Bozotti - STMicro CEO:

I think on the Time-of-Flight, we have enormous number of customers in our end. Of course, we are also working on new technologies for the Time-of-Flight. So, there would be a new wave, but we are pretty happy that the growth is impressive in Imaging and we are investing a lot for the new initiative. This is visible of course in terms of expenses in the P&L, but we have now sort (47:46) the $300 million business of Time-of-Flight that we want to keep going and we have the opportunity. I think it's pretty good and it's a pretty good business. I would say it's very good business, but in parallel, we are investing on new things and this will make – will allow us to make another important step.

Princeton IR Tech Announces 1.2MP 95fps ITAR-free SWIR Sensor

IMVEurope, Photonics: Princeton Infrared Technologies announces its first InGaAs SWIR camera to fall outside ITAR restrictions. The 1280SciCam features a 1,280 x 1,024-pixel image sensor on a 12 µm pitch, offering long exposure times, low read noise, 14-bit digital output, and full frame rates up to 95 Hz. The camera is designed for advanced scientific and astronomy applications, and is now classified for export by the Export Administration Regulations as EAR 6A003.b.4.a.

The US government’s export control has been going through a process of reform, which began in 2009 as part of the Obama Administration's Export Control Reform (ECR) initiative. The technology from Princeton Infrared no longer falls under ITAR control, which is equipment specially designed or modified for military use, but now falls under EAR. This, in theory, makes it easier to export the technology outside the USA.

Bob Struthers, sales director at Princeton Infrared Technologies, says: ‘Our 1280SciCam has already generated sales and applications with leading research entities overseas. An EAR export classification will propel our ability to serve these customers promptly and efficiently. This will be very valuable to their upcoming projects and equally beneficial to the growth of our young company.

IMVEurope: A year ago, Xenics SWIR cameras were granted Commodity Jurisdiction (CJ) approval. The CJ means that all SWIR cameras supplied by Xenics are now ITAR-free in the US.

Pyxalis and Framos Extend Cooperation

Presseagentur: Framos and Pyxalis extend their custom sensor design cooperation. The companies have been cooperating for several years and now have entered into a formal agreement. This partnership provides Framos partners with fully customized, high performance sensors, including sensor specification elaboration support, sensor architecture, design, prototyping, validation, industrialization and manufacturing.

“We’re delighted to work with FRAMOS Technologies in Europe and North America. As a 7-year-old company supplying custom image sensors, we’ve built successful partnerships with customers in many applications from niche markets (aerospace, scientific, defense) to medium volume (industrial, medical) and consumer markets (biometrics, automotive). Thanks to this cooperation with FRAMOS, it is now time to reach a larger market and to provide our capabilities and technologies to a greater number of customers,” says Philippe Rommeveaux, PYXALIS’s President and CEO.

HDPYX Customized Sensor

Tuesday, July 25, 2017

EI Image Sensors and Imaging Systems 2017 Papers in Open Access

EI Symposium Image Sensors and Imaging Systems 2017 papers are published in open access. There are quite a lot of good papers:
  • Accurate Joint Geometric Camera Calibration of Visible and Far-Infrared Cameras
    Authors: Shibata, Takashi; Tanaka, Masayuki; Okutomi, Masatoshi
  • High Sensitivity and High Readout Speed Electron Beam Detector using Steep pn Junction Si diode for Low Acceleration Voltage
    Authors: Koda, Yasumasa; Kuroda, Rihito; Hara, Masaya; Tsunoda, Hiroyuki; Sugawa, Shigetoshi
  • A full-resolution 8K single-chip portable camera system
    Authors: Nakamura, Tomohiro; Yamasaki, Takahiro; Funatsu, Ryohei; Shimamoto, Hiroshi
  • Filter Selection for Multispectral Imaging Optimizing Spectral, Colorimetric and Image Quality
    Authors: Wang, Yixuan; Berns, Roy S.
  • The challenge of shot-noise limited speckle patterns statistical analysis
    Authors: Tualle, J.-M.; Barjean, K.; Tinet, E.; Ettori, D.
  • Hot Pixel Behavior as Pixel Size Reduces to 1 micron
    Authors: Chapman, Glenn H.; Thomas, Rahul; Koren, Israel; Koren, Zahava
  • Octagonal CMOS Image Sensor for Endoscopic Applications
    Authors: Wäny, Martin; Santos, Pedro; Reis, Elena G.; Andrade, Alice; Sousa, Ricardo M.; Sousa, L. Natércia
  • Optimization of CMOS Image Sensor Utilizing Variable Temporal Multi-Sampling Partial Transfer Technique to Achieve Full-frame High Dynamic Range with Superior Low Light and Stop Motion Capability
    Authors: Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay
  • A Lateral Electric Field charge Modulator with Bipolar-gates for Time-resolved Imaging
    Authors: Morikawa, Yuki; Yasutomi, Keita; Imanishi, Shoma; Takasawa, Taishi; Kagawa, Keiichiro; Teranishi, Nobukazu; Kawahito, Shoji
  • A 128x128, 34μm pitch, 8.9mW, 190mK NETD, TECless Uncooled IR bolometer image sensor with column-wise processing
    Authors: Alacoque, Laurent; Martin, Sébastien; Rabaud, Wilfried; Beigné, Edith; Dupret, Antoine; Dupont, Bertrand
  • Residual Bulk Image Characterization using Photon Transfer Techniques
    Author: Crisp, Richard
  • RTS and photon shot noise reduction based on maximum likelihood estimate with multi-aperture optics and semi-photon-counting-level CMOS image sensors
    Authors: Ishida, Haruki; Kagawa, Keiichiro; Seo, Min-Woong; Komuro, Takashi; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
  • Linearity analysis of a CMOS image sensor
    Authors: Wang, Fei; Theuwissen, Albert
  • Fast, Low-Complex, Non-Contact Motion Encoder based on the NSIP Concept
    Authors: Åström, Anders; Forchheimer, Robert
  • In the quest of vision-sensors-on-chip: Pre-processing sensors for data reduction
    Authors: Rodríguez-Vázquez, A.; Carmona-Galán, R.; Fernández-Berni, J.; Brea, V.; Leñero-Bardallo, J.A.

Monday, July 24, 2017

TechInsights Reviews Pixel Isolation Structures

TechInsights keeps publishing parts from Ray Fontaine's presentation at IISW 2017. The third part reviews modern pixel-to-pixel crosstalk reduction measures: Front-DTI and Back-DTI:

Sony's dielectric-filled B-DTI structure in the 1.4 µm pixel generation, with a 2.9 µm thick substrate, extends to a depth of 1.9 µm from the back surface, reaching 2.4 µm at B-DTI intersections:

Samsung's 1.12 µm pixel generation B-DTI trenches extend 1.3 µm into a 2.6 µm thick substrate:

Omnivision's 1.0 µm pixel B-DTI extends 0.45 µm from the back surface into a 2.5 µm thick substrate:
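
Taking the three figures above together, the trench depth as a fraction of substrate thickness varies widely between vendors; a quick comparison:

```python
# B-DTI trench depth vs. substrate thickness, in um, taken from the
# TechInsights figures quoted above.
trenches = {
    "Sony 1.4 um pixel": (1.9, 2.9),
    "Samsung 1.12 um pixel": (1.3, 2.6),
    "Omnivision 1.0 um pixel": (0.45, 2.5),
}
for name, (depth, substrate) in trenches.items():
    print(f"{name}: {depth / substrate:.0%} of substrate thickness")
# Sony ~66%, Samsung 50%, Omnivision 18%
```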

Saturday, July 22, 2017

DVS Camera for Drones

University of Zurich spin-off and event-driven sensor patent licensee Insightness presents its camera for drone navigation and obstacle avoidance:

Sony Unveils Variable-Speed Global Shutter Sensor

Sony publishes a flyer of IMX428LLJ/LQJ monochrome global shutter sensor featuring "variable-speed shutter function (resolution 1 H units)":

Update: There is also a faster version IMX420LLJ/LQJ achieving 200fps at 8b resolution:

Friday, July 21, 2017

Videos from AutoSens Detroit Demo Sessions

AutoSens publishes a number of short videos from its Detroit conference held in May 2017:

Why Use SWIR?

Photonics publishes Sensors Unlimited Doug Malchow presentation on SWIR band advantages:

Thursday, July 20, 2017

Forza Compares CIS Foundries and Their Offerings

Forza Silicon's President & Co-Founder, Barmak Mansoorian, compares different image sensor foundries and processes in this video:

Event-based Vision Workshop Materials On-Line

It came to my attention that the International Workshop on Event-based Vision at ICRA'17 was held on June 2, 2017 in Singapore. The workshop materials are kindly made available on-line, including pdf presentations and videos.

The Workshop organizers have also created a very good Github-hosted list of Event Based Vision Resources.

Chronocam, ETH Zurich, Samsung are among the presenters of event driven cameras:

ETH Zurich and University of Zurich also announce the Misha Award for achievements in Neuromorphic Imaging. The 2017 Award goes to the "Event-based Vision for Autonomous High Speed Robotics" work by Guillermo Gallego, Elias Mueggler, Henry Rebecq, Timo Horstschäfer, and Davide Scaramuzza from University of Zurich, Switzerland.

Thanks to TD and GG for the info!

Isorg and FlexEnable Win Award for Flexible Image Sensor

ALA News: Isorg announces that its first large-sized high-resolution (500 dpi) flexible plastic fingerprint sensor, co-developed with FlexEnable (formerly Plastic Logic), won the 2017 Best of Sensors Expo - Silver Applications Award.

The high-resolution, ultra-thin, 500 dpi flexible image sensor (sensitive from visible to near infrared) has unique advantages in performance and compactness. Its ability to conform to three-dimensional shapes sets it apart from conventional image sensors. The device provides dual detection: fingerprinting as well as vein matching. Due to its large-area sensing and high-resolution image quality, the device is suited to biometric applications from fingerprint scanners and smartcards to mobile phones, where accuracy and robustness as well as cost-competitiveness are key.

Designed on a large area (3” x 3.2”; 7.62 x 8.13cm) plastic substrate, the flexible image sensor is ultra-thin (300 microns), therefore remarkably lightweight, compact and highly resistant to shock. Central to the 500 dpi flexible image sensor is an Organic Photodiode (OPD), a printed structure developed by Isorg that converts light into current – responsible for capturing the fingerprint. Isorg also developed the readout electronics, the forensics quality processing software and the optics to enable seamless integration in products. FlexEnable, the leader in developing and industrializing flexible organic electronics, developed the Organic TFT backplane technology, an alternative to amorphous silicon. This partnership between the two companies began in Q4 2013.

Tuesday, July 18, 2017

Yole IR Imaging Forum

Yole Développement publishes the agenda of its 2nd IR Imaging Forum, to be held on Sept. 7 in Shenzhen, China:

  • Uncooled IR Imaging Market Perspectives
    Eric Mounier, Senior Analyst, Yole Développement
  • State of the art of High End Thermal Image Sensors performances in mass production
    Sebastien Tinnes, Marketing Manager, ULIS
  • The Status and Challenges of Thermal Imaging in Security Applications
    Guo Haixun, Product Director of Thermal Imaging, Hikvision
  • Progress on low cost Thermopile Arrays for high volume applications – eg. office automation, person detection and thermal imaging
    Joerg Schieferdecker, CEO and Co-Founder, Heimann Sensors
  • New ultra-compact infrared cameras with 500 nm spectral response for metal industry
    Torsten Czech, Head of Product Management, Optris
  • Uncooled Infrared Imaging System for Forest Fire Detection and Monitoring
    Wang You, Uncooled Infrared Imaging Senior Expert, JIR Infrared
  • Ion Beam Deposition of VOx films for uncooled bolometer and thermal sensor applications
    David I C Pearson, Ion Beam Senior Technologist, Oxford Instruments Plasma Technology
  • Modern Assembly technology for Packaging of IR Microbolometers
    Alex Voronel, Director of Global Sales, SST Vacuum Reflow Systems
  • Prospect of commercial chalcogenide glasses used for uncooled infrared imaging system
    Rongping Wang, Senior Fellow, The Australian National University
  • MOEMS components with subwavelength structures for hyperspectral imaging
    Steffen Kurth, Department manager, Fraunhofer Institute for Electronic Nano Systems (ENAS)

Image Sensors America Agenda

The Image Sensors America conference, to be held on October 12-13, 2017 in San Francisco, has published its agenda:

  • Keynote Presentation: Lifestyle Image Sensor Requirements
    Farhad Abed, Image quality engineer of GoPro
  • Image Sensor Venture and M&A Activity: An Overview of Recent Deals, Trends, And Developments
    Rudy Berger, Managing Partner of Woodside Capital Partners
  • Image Quality Oriented Sensor Characterization
    Zhenhua Lai, Imaging Optics System Engineer of Motorola Mobility
  • A New Frontier in Optical Design: Segmented Optics Combined with Computational Imaging Algorithms
    Dmitry V. Shmunk, CTO of Almalence Inc
  • IR Bolometer Technology
    Patrick Robert, Electronic Design Manager of ULIS
  • Global Shutter vs. Rolling Shutter: Performance And Architecture Trade Off
    Abhay Rai, Director of product marketing of Sony Electronics
  • Enhancing the Spectral Sensitivity of Standard Silicon-based Imaging Detectors
    Zoran Ninkov, Professor in the Center for Imaging Science (CIS) of Rochester Institute of Technology
  • TDI Imaging Using CCD-in-CMOS Technology: An Optimal Solution for Earth Observation, Industrial Inspection and Life Sciences Applications
    Arye Lipman, Strategic Alliances Manager of Imec
  • Semiconductor Sequencing Technology: A Scalable, Low-Cost Approach to Using Integrated CMOS Sensor Arrays
    Brian Goldstein, Sr. Staff Engineer in Sensor Design Engineering in the Clinical Next-Generation Sequencing Division of Thermo Fisher Scientific
  • Photon-to-Photon CMOS Imager: Optoelectronic 3D Integration
    Gaozhan Cai, Design team leader, focusing on designing custom CMOS image sensors of Caeleste
  • Going Beyond 2x Optical Zoom In Dual Cameras: The Future of Dual Camera Technology
    Gal Shabtay, GM and VP R&D of Corephotonics
  • Image Sensors for the Endoscopy Market: Customer Needs and Innovation Opportunities
    Dave Shafer, Managing Fellow of Intuitive Surgical
  • Will Your Next Sensor Assist in Replacing Your Job?
    Yair Siegel, Director of Strategic Marketing of CEVA
  • Enabling Always-On Machine Vision
    Evgeni Gousev, Senior Director of Qualcomm Technologies Inc.
  • PanomorphEYE Human Sight Sensor For Artificial Intelligence Revolution
    Patrice Roulet, Director of Engineering and Co-Founder of Technology of Immervision
  • High-Speed Imaging: Core Technologies and Devices Achieved
    Takashi Watanabe, Developer of log-type imagers and range image sensors of Brookman Technology, Inc.
  • Tools and Processes Needed to De-risk the Design-In of Image Sensors
    Simon Che’Rose, Head of Engineering of FRAMOS
  • Single Module Solution for Depth Mapping
  • Image Sensor Requirements for 3D Cameras
    Rich Hicks, Senior Camera and Imaging Technologist of Intel, Global Supply Management
  • Laser Diode Solutions for 3D Depth Sensing LiDAR Systems
    Tomoko Ohtsuki, Product Line Manager, Lumentum
  • A Comparison Of Depth Sensing Solutions For Image Sensors, LiDAR And Beyond
    Scott Johnson, Director of Technology Business Alignment of ON Semiconductor

ISSCC 2017 Plenary on High-Speed DNA Sequencing

High-speed DNA sequencing is an emerging application for image sensors and sister devices (such as ion sensors, pH sensors, etc.). The ion sensor part starts at about the 15:00 mark in this ISSCC 2017 plenary session video by Jonathan Rothberg, Yale University:

Monday, July 17, 2017

ICFO Graphene Image Sensor Video

Circuit Cellar publishes a nice interview with Stijn Goossens, one of ICFO developers of graphene image sensor announced in May:

Mobile Phone Food Analysis

Open-access Sensors journal publishes a paper "Smartphone-Based Food Diagnostic Technologies: A Review" by Giovanni Rateni, Paolo Dario, and Filippo Cavallo from BioRobotics Institute, Italy. A smartphone with an image sensor turns out to be quite a versatile platform:

CIS History Diagram

Techbriefs magazine publishes an article "CMOS, The Future of Image Sensor Technology" by Gareth Power, Marketing Manager, Teledyne e2v. The main trends in industrial and scientific sensors are said to be higher speeds and lower prices. There is also a diagram on image sensor companies spin-offs and mergers:

Some parts are not exactly correct here; for example, Avago was not spun off from Micron. Also, Far Eastern deals are missing, such as Toshiba-Sony and SiliconFile-Hynix. But as a first attempt at such a diagram, it looks really nice.

Thanks to LH for the link!

Sunday, July 16, 2017

Optimal Coding Functions for I-ToF Imaging

University of Wisconsin-Madison and Columbia University publish a technical report "What Are Optimal Coding Functions for Time-of-Flight Imaging?" by Mohit Gupta, Andreas Velten, Shree Nayar, and Eric Breitbach.

"Almost all current C-ToF systems use sinusoid or square coding functions, resulting in a limited depth resolution. In this paper, we present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. Given a fixed total source power and acquisition time, the new Hamiltonian coding scheme can achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art methods, especially in low SNR settings."
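
For context, the sinusoid coding that the paper treats as the baseline recovers depth from four phase-shifted correlation measurements; a minimal sketch (the modulation frequency and signal values are illustrative, not from the report):

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # illustrative modulation frequency (20 MHz), not from the report

def sinusoid_depth(c0, c90, c180, c270):
    """Standard 4-bucket sinusoid C-ToF decoding: recover the phase from
    four correlation samples, then convert phase to depth."""
    phase = math.atan2(c90 - c270, c0 - c180) % (2 * math.pi)
    return C * phase / (4 * math.pi * F_MOD)

# Synthetic target at 3 m: the round trip imposes phase 4*pi*f*d/c
d_true = 3.0
phi = (4 * math.pi * F_MOD * d_true / C) % (2 * math.pi)
samples = [math.cos(phi - k * math.pi / 2) for k in range(4)]
print(sinusoid_depth(*samples))  # ~3.0 m
```

The Hamiltonian codes proposed in the report replace the sinusoid with functions traced along Hamiltonian cycles on hypercube graphs, improving depth resolution for the same source power and acquisition time.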

The "geometrically intuitive hypercube graphs" look like this: