Robust and Fast Decoding of High-Capacity Color QR Codes for Mobile Applications

 

Abstract

 

The use of color in QR codes brings extra data capacity, but it also poses tremendous challenges to the decoding process due to chromatic distortion, namely cross-channel color interference and illumination variation. In particular, we discover a new type of chromatic distortion in high-density color QR codes, cross-module color interference, caused by the high module density, which also makes geometric distortion correction more challenging. To address these problems, we propose two approaches, LSVM-CMI and QDA-CMI, which jointly model these different types of chromatic distortion. Extended from SVM and QDA, respectively, both LSVM-CMI and QDA-CMI optimize a particular objective function and learn a color classifier. Furthermore, a robust geometric transformation method and several pipeline refinements are proposed to boost the decoding performance for mobile applications. We put forth and implement a framework for high-capacity color QR codes equipped with our methods, called HiQ. To evaluate the performance of HiQ, we collect a challenging large-scale color QR code dataset, CUHK-CQRC, which consists of 5390 high-density color QR code samples. A comparison with the baseline method [2] on CUHK-CQRC shows that HiQ outperforms [2] by at least 188% in decoding success rate and 60% in bit error rate. Our implementations of HiQ on iOS and Android further demonstrate the effectiveness of our framework in real-world applications.
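As a rough illustration of how cross-module interference can be folded into color classification, the sketch below trains a plain quadratic-discriminant classifier on features that concatenate each module's observed RGB with the RGB of its four neighbors. The array names and the 4-neighbor feature layout are assumptions made for illustration only; the QDA-CMI and LSVM-CMI methods of this work optimize their own joint objectives rather than off-the-shelf QDA.

import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def neighbor_features(modules):
    # Stack each module's RGB with the RGB of its 4-neighbors (15-D per module).
    padded = np.pad(modules, ((1, 1), (1, 1), (0, 0)), mode="edge")
    center = padded[1:-1, 1:-1]
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    feats = np.concatenate([center, up, down, left, right], axis=-1)
    return feats.reshape(-1, 15)

def train_color_classifier(modules, labels):
    # modules: (H, W, 3) mean RGB per module; labels: (H, W) color indices (e.g., 0..7).
    clf = QuadraticDiscriminantAnalysis(reg_param=1e-3)  # regularized plain QDA
    clf.fit(neighbor_features(modules), labels.reshape(-1))
    return clf

def classify_modules(clf, modules):
    # Predict a module color index for every module of a captured code.
    return clf.predict(neighbor_features(modules)).reshape(modules.shape[:2])

In a layered design, each predicted color index would then typically be mapped back to one bit per layer before conventional per-layer QR error correction.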

 

Existing System

 

Recent years have seen numerous attempts at using color to increase the capacity of traditional 2D barcodes. Recent projects like COBRA [13], Strata [14] and FOCUS [15] support visible light communication by streaming a sequence of 2D barcodes from a display to the camera of the receiving smartphone. However, the scope of their work is different from ours: they focus on designing new 2D (color or monochrome) barcode systems that are robust for message streaming (via video sequences) between relatively large smartphone screens (or other displays) and the capturing camera. H. Bagherinia and R. Manduchi [16] propose to model color variation under various illuminations using a low-dimensional subspace, e.g., via principal component analysis, without requiring reference color patches. T. Shimizu et al. [17] propose a 64-color 2D barcode and augment the RGB color space with seed colors that function as references to facilitate color classification.
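The low-dimensional subspace idea of [16] can be sketched as a nearest-subspace classifier: fit one PCA subspace per barcode color from RGB observations gathered under many illuminations, then assign a new observation to the color whose subspace reconstructs it with the smallest residual. This is only a minimal illustrative baseline under assumed variable names, not the exact formulation of [16].

import numpy as np
from sklearn.decomposition import PCA

def fit_color_subspaces(samples_per_color, n_components=2):
    # samples_per_color: dict {color_id: (N, 3) array of RGBs observed under
    # many illuminations}; fit one PCA subspace per color class.
    return {c: PCA(n_components=n_components).fit(X)
            for c, X in samples_per_color.items()}

def classify(rgb, subspaces):
    # Assign an observed RGB to the color whose subspace reconstructs it best.
    x = np.asarray(rgb, dtype=float).reshape(1, -1)
    residuals = {c: np.linalg.norm(x - p.inverse_transform(p.transform(x)))
                 for c, p in subspaces.items()}
    return min(residuals, key=residuals.get)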

 

Proposed System

 

This work focuses on tackling critical challenges such as cross-module interference (CMI) and cross-channel interference (CCI) to support fast and robust decoding when dense color QR codes are printed on paper substrates with a maximal data-capacity-per-unit-area ratio. Our proposed HiQ framework addresses the aforementioned limitations in a comprehensive manner. On the encoding side, HiQ differs from HCC2D in that HiQ codes do not add extra reference symbols around the color QR codes; moreover, the color QR code generation of the PCCC framework is a special case of HiQ, namely 3-layer HiQ codes. On the decoding side, the differences mainly lie in geometric distortion correction and color recovery. HiQ adopts offline learning and thus does not rely on specially designed reference colors for training the color recovery model, as HCC2D and PCCC do. More importantly, by using RGT and QDA-CMI (or LSVM-CMI), HiQ addresses the geometric and chromatic distortion particular to high-density color QR codes, which is not considered by HCC2D or PCCC.
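To make the layered construction concrete, the sketch below merges three monochrome QR module matrices into an 8-color code by letting each layer darken one RGB channel (a CMY-style mapping when printed), and shows the inverse per-module split performed after color recovery. The channel assignment, pixel scaling, and helper names are illustrative assumptions, not the exact HiQ specification.

import numpy as np

def merge_layers(layer_0, layer_1, layer_2, module_px=8):
    # Each layer is a 2-D boolean module matrix (True = dark module) produced
    # by any standard monochrome QR encoder; all three must share one shape.
    layers = np.stack([layer_0, layer_1, layer_2], axis=-1).astype(np.uint8)
    rgb = (255 * (1 - layers)).astype(np.uint8)   # dark module -> channel off
    # Scale each module to module_px x module_px pixels for rendering/printing.
    return np.kron(rgb, np.ones((module_px, module_px, 1), dtype=np.uint8))

def split_layers(module_rgb, threshold=128):
    # Inverse step after color recovery: per-module RGB back to three bit layers.
    bits = module_rgb < threshold
    return bits[..., 0], bits[..., 1], bits[..., 2]

With this kind of mapping, an all-dark module triplet renders as black and an all-light triplet as white, so every module carries three bits, which is why a 3-layer code triples the data-capacity-per-unit-area of a monochrome QR code of the same footprint.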

 

CONCLUSION

 

In this paper, we have proposed two methods that jointly model different types of chromatic distortion (cross-channel color interference and illumination variation) together with a newly discovered type of chromatic distortion, cross-module color interference, for high-density color QR codes. A robust geometric transformation method is developed to address the challenge of geometric distortion. In addition, we have presented a framework for high-capacity color QR codes, HiQ, which enables users and developers to create generalized QR codes with a flexible and broader range of choices in data capacity, error correction, color, and so on. To evaluate the proposed approach, we have collected the first large-scale color QR code dataset, CUHK-CQRC. Experimental results have shown substantial advantages of HiQ over the baseline approach. Our implementation of HiQ on both Android and iOS and our evaluation using off-the-shelf smartphones have demonstrated its usability and effectiveness in real-world practice. In the future, as opposed to the current design where error correction is performed layer by layer, a new mechanism will be developed to share correction capacity across layers by constructing error correction codes and performing correction over all layers as a whole, which we believe will further improve the robustness of our color QR code system.

 

REFERENCES

[1] Z. Yang, Z. Cheng, C. C. Loy, W. C. Lau, C. M. Li, and G. Li, “Towards robust color recovery for high-capacity color QR codes,” in Proc. IEEE Int. Conf. Image Process. (ICIP), Sept. 2016, pp. 2866–2870.

[2] H. Blasinski, O. Bulan, and G. Sharma, “Per-colorant-channel color barcodes for mobile applications: An interference cancellation framework,” IEEE Trans. Image Process., vol. 22, no. 4, pp. 1498–1511, Apr. 2013.

[3] Y. Liu, J. Yang, and M. Liu, “Recognition of QR code with mobile phones,” in Proc. Chinese Control and Decision Conf. (CCDC), 2008, pp. 203–206.

[4] C. M. Li, P. Hu, and W. C. Lau, “Authpaper: Protecting paper-based documents and credentials using authenticated 2D barcodes,” in IEEE Int. Conf. Commun. (ICC), Jun. 2015, pp. 7400–7406.

[5] R. Hartley and A. Zisserman, Multiple view geometry in Computer Vision. Cambridge university press, 2003.

[6] A. Gijsenij, T. Gevers, and J. Van De Weijer, “Improving color constancy by photometric edge weighting,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 5, pp. 918–929, May 2012.

[7] A. Grillo, A. Lentini, M. Querini, and G. F. Italiano, “High capacity colored two dimensional codes,” in Proc. IEEE Int. Multiconf. Comput. Sci. Inf. Technol., Oct. 2010, pp. 709–716.

[8] H. Kato, K. T. Tan, and D. Chai, “Novel colour selection scheme for 2D barcode,” in Proc. IEEE Int. Symp. Intell. Signal Process. Commun. Syst., Jan. 2009, pp. 529–532.

[9] T. Onoda and K. Miwa, “Hierarchised two-dimensional code, creation method thereof, and read method thereof,” available at Japan Patent Office, vol. 213336, 2005.

[10] D. Parikh and G. Jancke, “Localization and segmentation of a 2D high capacity color barcode,” in IEEE Workshop Appl. of Comput. Vis. (WACV), Jan. 2008, pp. 1–6.

[11] M. Querini and G. F. Italiano, “Color classifiers for 2D color barcodes,” in Proc. IEEE Fed. Conf. Comput. Sci. and Inf. Syst., Sept. 2013, pp. 611–618.

[12] C. Chen, W. Huang, B. Zhou, C. Liu, and W. H. Mow, “PiCode: A new picture-embedding 2D barcode,” IEEE Trans. Image Process., vol. 25, no. 8, pp. 3444–3458, Aug. 2016.

[13] T. Hao, R. Zhou, and G. Xing, “COBRA: Color barcode streaming for smartphone systems,” in Proc. ACM 10th Int. Conf. Mobile Syst., Appl., Serv. (MobiSys), Jun. 2012, pp. 85–98.

[14] W. Hu, J. Mao, Z. Huang, Y. Xue, J. She, K. Bian, and G. Shen, “Strata: Layered coding for scalable visual communication,” in Proc. ACM 20th Annual Int. Conf. Mobile Comput. Netw. (MobiCom), Sept. 2014, pp. 79–90.

[15] F. Hermans, L. McNamara, G. Sörös, C. Rohner, T. Voigt, and E. Ngai, “Focus: Robust visual codes for everyone,” in Proc. ACM 14th Annual Int. Conf. Mobile Syst., Appl., Serv. (MobiSys), Jun. 2016, pp. 319–332.

[16] H. Bagherinia and R. Manduchi, “A theory of color barcodes,” in IEEE Int. Conf. Comput. Vis. Workshops (ICCV Workshops), Nov. 2011, pp. 806–813.

[17] T. Shimizu, M. Isami, K. Terada, W. Ohyama, and F. Kimura, “Color recognition by extended color space method for 64-color 2-D barcode,” in MVA, Jan. 2011, pp. 259–262.

[18] H. Bagherinia and R. Manduchi, “A novel approach for color barcode decoding using smart phones,” in Proc. IEEE Int. Conf. Image Process., Oct. 2014, pp. 2556–2559.

[19] C.-W. Hsu and C.-J. Lin, “A comparison of methods for multiclass support vector machines,” IEEE Trans. Neural Netw., vol. 13, no. 2, pp. 415–425, Mar. 2002.

[20] L. Simonot and M. Hébert, “Between additive and subtractive color mixings: intermediate mixing models,” J. Opt. Soc. Am. A Opt. Image Sci. Vis., vol. 31, no. 1, pp. 58–66, Jan. 2014.

[21] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight, “Sparsity and smoothness via the fused lasso,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 1, 2005.