Computational Photography ICIP 2016 List of references

Computational Photography Tutorial @ ICIP 2016
List of references

Mohit Gupta and Jean-François Lalonde

1 Coded photography

Object Side Coding

[1] H. Du, X. Tong, X. Cao, and S. Lin. “A prism-based system for multispectral video acquisition”. In: Computer Vision, 2009 IEEE 12th International Conference on. Sept. 2009, pp. 175–182.
[2] T. Georgeiv, K. C. Zheng, B. Curless, D. Salesin, S. K. Nayar, and C. Intwala. “Spatio-Angular Resolution Tradeoff in Integral Photography”. In: Eurographics Symposium on Rendering. 2006, pp. 263–272.
[3] S. Kuthirummal and S. K. Nayar. “Multiview Radial Catadioptric Imaging for Scene Capture”. In: ACM Trans. Graph. 25.3 (July 2006), pp. 916–923.
[4] R. Raskar, A. Agrawal, and J. Tumblin. “Coded Exposure Photography: Motion Deblurring Using Fluttered Shutter”. In: ACM Trans. Graph. 25.3 (July 2006), pp. 795–804.
[5] Y. Schechner and S. Nayar. “Generalized Mosaicing: Polarization Panorama”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 27.4 (Apr. 2005), pp. 631–636.
[6] Y. Schechner and N. Karpel. “Clear underwater vision”. In: Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on. Vol. 1. June 2004.
[7] Y. Schechner and S. Nayar. “Generalized Mosaicing: High Dynamic Range in a Wide Field of View”. In: International Journal on Computer Vision 53.3 (July 2003), pp. 245–267.
[8] Y. Schechner and S. Nayar. “Generalized Mosaicing: Wide Field of View Multispectral Imaging”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 24.10 (Oct. 2002), pp. 1334–1348.
[9] J. Gluckman and S. Nayar. “Rectified catadioptric stereo sensors”. In: Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on. Vol. 2. 2000, 380–387 vol.2.
[10] D. H. Lee, I. S. Kweon, and R. Cipolla. “Single Lens Stereo with a Biprism”. In: Proceedings of the IAPR International Workshop on Machine Vision and Applications. 1998, pp. 136–139.
[11] J. S. Chahl and M. V. Srinivasan. “Reflective surfaces for panoramic imaging”. In: Appl. Opt. 36.31 (Nov. 1997), pp. 8275–8285.
[12] S. Peleg and J. Herman. “Panoramic mosaics by manifold projection”. In: Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on. June 1997, pp. 338–343.
[13] K. Yamazawa, Y. Yagi, and M. Yachida. “Omnidirectional imaging with hyperboloidal projection”. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Vol. 2. July 1993, 1029–1034 vol.2.

[14] L. B. Wolff and T. E. Boult. “Constraining object features using a polarization reflectance model”. In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 13.7 (July 1991), pp. 635–657.

Pupil (Aperture) Plane Coding

[1] O. Cossairt, C. Zhou, and S. Nayar. “Diffusion Coded Photography for Extended Depth of Field”. In: ACM Trans. Graph. 29.4 (July 2010), 31:1–31:10.
[2] A. Levin, S. W. Hasinoff, P. Green, F. Durand, and W. T. Freeman. “4D Frequency Analysis of Computational Cameras for Depth of Field Extension”. In: ACM Trans. Graph. 28.3 (July 2009), 97:1–97:14.
[3] Y. Bando, B.-Y. Chen, and T. Nishita. “Extracting Depth and Matte Using a Color-filtered Aperture”. In: ACM Trans. Graph. 27.5 (Dec. 2008), 134:1–134:9.
[4] C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen. “Programmable Aperture Photography: Multiplexed Light Field Acquisition”. In: ACM Trans. Graph. 27.3 (Aug. 2008), 55:1–55:10.
[5] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. “Image and Depth from a Conventional Camera with a Coded Aperture”. In: ACM Trans. Graph. 26.3 (July 2007).
[6] S. W. Hasinoff and K. N. Kutulakos. “Confocal stereo”. In: Proc. ECCV. Springer, 2006, pp. 620–634.
[7] M. Aggarwal and N. Ahuja. “Split aperture imaging for high dynamic range”. In: Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on. Vol. 2. 2001, 10–17 vol.2.
[8] E. R. Dowski and W. T. Cathey. “Extended depth of field through wave-front coding”. In: Appl. Opt. 34.11 (Apr. 1995), pp. 1859–1866.
[9] A. P. Pentland. “A New Sense for Depth of Field”. In: Pattern Analysis and Machine Intelligence, IEEE Transactions on PAMI-9.4 (July 1987), pp. 523–531.

Focal (Image) Plane Coding

[1] G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl. “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging”. In: Nature Methods 7 (2010).
[2] J. Gu, Y. Hitomi, T. Mitsunaga, and S. Nayar. “Coded Rolling Shutter Photography: Flexible Space-Time Sampling”. In: IEEE International Conference on Computational Photography (ICCP). Mar. 2010.
[3] M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan. “Flexible Voxels for Motion-Aware Videography”. In: Proc. European Conference on Computer Vision. 2010.
[4] S. Kuthirummal, H. Nagahara, C. Zhou, and S. Nayar. “Flexible Depth of Field Photography”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 99 (Mar. 2010).
[5] H. Nagahara, S. Kuthirummal, C. Zhou, and S. Nayar. “Flexible Depth of Field Photography”. In: European Conference on Computer Vision (ECCV). Oct. 2008.

[6] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin. “Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing”. In: ACM Trans. Graph. 26.3 (July 2007).
[7] S. G. Narasimhan and S. K. Nayar. “Enhancing Resolution Along Multiple Imaging Dimensions Using Assorted Pixels”. In: IEEE Trans. Pattern Anal. Mach. Intell. 27.4 (Apr. 2005), pp. 518–530.
[8] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan. “Light Field Photography with a Hand-held Plenoptic Camera”. Technical Report CSTR 2005-02, Stanford Computer Science Department, Stanford, CA (2005).
[9] T. Naemura, T. Yoshida, and H. Harashima. “3-D computer graphics based on integral photography”. In: Opt. Express 8.4 (Feb. 2001), pp. 255–262.
[10] E. Adelson and J. Wang. “Single lens stereo with a plenoptic camera”. In: Pattern Analysis and Machine Intelligence, IEEE Transactions on 14.2 (Feb. 1992), pp. 99–106.
[11] G. Hausler. “A method to increase the depth of focus by two step image processing”. In: Optics Communications (1972), pp. 38–42.

Illumination Coding

[1] Leica-Geosystems. Pulsed LIDAR Sensor. http://www.leica-geosystems.us/en/index.htm.
[2] Velodyne. Pulsed LIDAR Sensor. http://www.velodynelidar.com/lidar/lidar.aspx.
[3] M. Gupta, S. K. Nayar, M. Hullin, and J. Martin. “Phasor Imaging: A Generalization of Correlation Based Time-of-Flight Imaging”. In: ACM Transactions on Graphics (2015).
[4] N. Matsuda, O. Cossairt, and M. Gupta. “MC3D: Motion Contrast 3D Scanning”. In: IEEE International Conference on Computational Photography. 2015.
[5] M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan. “A Practical Approach to 3D Scanning in the Presence of Interreflections, Subsurface Scattering and Defocus”. In: International Journal of Computer Vision 102.1-3 (2013), pp. 33–55.
[6] M. Gupta and S. K. Nayar. “Micro Phase Shifting”. In: Proc. IEEE CVPR. 2012.
[7] M. Gupta, Y. Tian, S. Narasimhan, and L. Zhang. “A Combined Theory of Defocused Illumination and Global Light Transport”. In: International Journal of Computer Vision 98.2 (2012), pp. 146–167.
[8] A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar. “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging”. In: Nature 3 (745) (2012).
[9] J. Salvi, S. Fernandez, T. Pribanic, and X. Llado. “A state of the art in structured light patterns for surface profilometry”. In: Pattern Recognition 43.8 (2010), pp. 2666–2680.
[10] S. Zhang, D. V. D. Weide, and J. Oliver. “Superfast phase-shifting method for 3-D shape measurement”. In: Opt. Express 18.9 (2010), pp. 9684–9689.
[11] A. Kirmani, T. Hutchison, J. Davis, and R. Raskar. “Looking around the corner using transient imaging”. In: IEEE ICCV. 2009.

[12] D. Lanman, D. Crispell, and G. Taubin. “Surround Structured Lighting: 3-D Scanning with Orthographic Illumination”. In: Comput. Vis. Image Underst. 113.11 (2009), pp. 1107–1117.
[13] L. Zhang and S. Nayar. “Projection Defocus Analysis for Scene Capture and Image Display”. In: ACM Trans. Graph. 25.3 (2006), pp. 907–915.
[14] J. Salvi, J. Pagès, and J. Batlle. “Pattern codification strategies in structured light systems”. In: Pattern Recognition 37.4 (2004), pp. 827–849.
[15] D. Scharstein and R. Szeliski. “High-accuracy stereo depth maps using structured light”. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Vol. 1. 2003.
[16] L. Zhang, B. Curless, and S. M. Seitz. “Spacetime Stereo: Shape Recovery for Dynamic Scenes”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2003, pp. 367–374.
[17] S. Rusinkiewicz, O. Hall-Holt, and M. Levoy. “Real-time 3D Model Acquisition”. In: ACM Trans. Graph. 21.3 (2002), pp. 438–446.
[18] R. Lange and P. Seitz. “Solid-State time-of-flight range camera”. In: IEEE J. Quantum Electronics 37.3 (2001).
[19] R. Lange. “3D time-of-flight distance measurement with custom solid-state image sensors in CMOS-CCD-technology”. PhD Thesis (2000).
[20] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk. “The Digital Michelangelo Project: 3D Scanning of Large Statues”. In: SIGGRAPH. 2000, pp. 131–144.
[21] J.-Y. Bouguet and P. Perona. “3D photography on your desk”. In: Proc. IEEE International Conference on Computer Vision. 1998, pp. 43–50.
[22] E. Horn and N. Kiryati. “Toward optimal structured light patterns”. In: International Conference on Recent Advances in 3-D Digital Imaging and Modeling. 1997, pp. 28–35.
[23] T. Kanade, A. Gruss, and L. Carley. “A very fast VLSI rangefinder”. In: IEEE International Conference on Robotics and Automation. 1991, pp. 1322–1329.
[24] I. Moring, T. Heikkinen, R. Myllyla, and A. Kilpela. “Acquisition Of Three-Dimensional Image Data By A Scanning Laser Range Finder”. In: Optical Engineering 28.8 (1989).
[25] K. Sato and S. Inokuchi. “3D surface measurement by space encoding range imaging”. In: Journal of Robotic Systems 2.1 (1985), pp. 27–39.
[26] T. C. Strand. “Optical Three-Dimensional Sensing For Machine Vision”. In: Optical Engineering 24.1 (1985).
[27] S. Inokuchi, K. Sato, and F. Matsuda. “Range imaging system for 3-D object recognition”. In: International Conference Pattern Recognition. 1984, pp. 806–808.
[28] J. L. Posdamer and M. D. Altschuler. “Surface measurement by space-encoded projected beam systems”. In: Computer Graphics and Image Processing 18.1 (1982), pp. 1–17.
[29] D. E. Smith. “Electronic Distance Measurement for Industrial and Scientific Applications”. In: Hewlett-Packard Journal 31.6 (1980).
[30] G. Mamon, D. G. Youmans, Z. G. Sztankay, and C. E. Mongan. “Pulsed GaAs laser terrain profiler”. In: Appl. Opt. 17.6 (1978), pp. 868–877.
[31] G. J. Agin and T. O. Binford. “Computer Description of Curved Objects”. In: IEEE Trans. Comput. 25.4 (1976), pp. 439–449.

[32] J. M. Payne. “An Optical Distance Measuring Instrument”. In: Review of Scientific Instruments 44.3 (1973), pp. 304–306.
[33] P. M. Will and K. S. Pennington. “Grid coding: A novel technique for image processing”. In: Proceedings of the IEEE 60.6 (1972), pp. 669–680.
[34] Y. Shirai and M. Suwa. “Recognition of Polyhedrons with a Range Finder”. In: Proceedings of the International Joint Conference on Artificial Intelligence. 1971, pp. 80–87.
[35] W. Koechner. “Optical ranging system employing a high power injection laser diode”. In: IEEE Trans. aerospace and electronic systems 4.1 (1968).
[36] B. S. Goldstein and G. F. Dalrymple. “Gallium arsenide injection laser radar”. In: Proc. of the IEEE 55.2 (1967), pp. 181–188.

Surveys

[1] S. K. Nayar. Computational Cameras: Approaches, Benefits and Limits. Tech. rep. Jan. 2011.
[2] C. Zhou and S. K. Nayar. “Computational Cameras: Convergence of Optics and Processing”. In: IEEE Transactions on Image Processing 20.12 (Dec. 2011), pp. 3322–3340.
[3] S. K. Nayar. “Computational Cameras: Redefining the Image”. In: IEEE Computer Magazine, Special Issue on Computational Photography (Aug. 2006), pp. 30–38.
[4] F. Blais. “Review of 20 years of range sensor development”. In: Journal of Electronic Imaging 13.1 (2004), pp. 231–243.
[5] P. Besl. “Active, optical range imaging sensors”. In: Machine Vision and Applications 1.2 (1988), pp. 127–152.
[6] R. A. Jarvis. “A Perspective on Range Finding Techniques for Computer Vision”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 5.2 (1983), pp. 122–139.



2 Augmented photography

Inverting the imaging pipeline

[1] A. Mosleh, P. Green, E. Onzon, I. Begin, and P. Langlois. “Camera Intrinsic Blur Kernel Estimation: A Reliable Framework”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2015.
[2] F. Heide, K. Egiazarian, J. Kautz, K. Pulli, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, and W. Heidrich. “FlexISP: a flexible camera image processing framework”. In: ACM Transactions on Graphics 33.6 (Nov. 2014), pp. 1–13.
[3] F. Heide, M. Rouf, M. B. Hullin, B. Labitzke, W. Heidrich, and A. Kolb. “High-quality computational imaging through simple lenses”. In: ACM Transactions on Graphics 32.5 (2013), pp. 1–14.
[4] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf. “Non-stationary correction of optical aberrations”. In: International Conference on Computer Vision. IEEE, Nov. 2011, pp. 659–666.
[5] L. Zhang, X. Wu, and A. Buades. “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding”. In: Journal of Electronic Imaging 20.2 (Apr. 2011).
[6] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. “Image denoising by sparse 3D transform-domain collaborative filtering”. In: IEEE Transactions on Image Processing 16.8 (2007), pp. 1–16.
[7] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. “Image and depth from a conventional camera with a coded aperture”. In: ACM Transactions on Graphics 26.3 (July 2007), p. 70.
[8] Y. Weiss and W. Freeman. “What makes a good model of natural images?” In: 2007 IEEE Conference on Computer Vision and Pattern Recognition. 2007.
[9] B. A. Olshausen and D. J. Field. “Wavelet-like receptive fields emerge from a network that learns sparse codes for natural images.” In: Nature (1996), pp. 1–11.

Burst photography

[1] S. W. Hasinoff, J. T. Barron, and A. Adams. “Burst photography for high dynamic range and low-light imaging on mobile cameras”. In: ACM Transactions on Graphics (SIGGRAPH Asia) (2016).
[2] M. Delbracio and G. Sapiro. “Removing Camera Shake via Weighted Fourier Burst Accumulation”. In: IEEE Transactions on Image Processing 24.11 (Nov. 2015), pp. 3293–3307.
[3] A. Ito, S. Tambe, K. Mitra, A. C. Sankaranarayanan, and A. Veeraraghavan. “Compressive epsilon photography for post-capture control in digital imaging”. In: ACM Transactions on Graphics 33.4 (July 2014), pp. 1–12.
[4] Z. Liu, L. Yuan, X. Tang, M. Uyttendaele, and J. Sun. “Fast burst images denoising”. In: ACM Transactions on Graphics 33.6 (Nov. 2014), pp. 1–9.
[5] S. H. Park and M. Levoy. “Gyro-Based Multi-image Deconvolution for Removing Handshake Blur”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2014, pp. 3366–3373.

[6] E. Ringaby and P.-E. Forssén. “A virtual tripod for hand-held video stacking on smartphones”. In: IEEE International Conference on Computational Photography. IEEE, May 2014, pp. 1–9.
[7] M. Granados, B. Ajdin, M. Wand, C. Theobalt, H. P. Seidel, and H. P. A. Lensch. “Optimal HDR reconstruction with linear digital cameras”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2010, pp. 215–222.
[8] N. Joshi and M. F. Cohen. “Seeing Mt. Rainier: Lucky Imaging for multi-image denoising, sharpening, and haze removal”. In: IEEE International Conference on Computational Photography. 2010.
[9] D. G. Lowe. “Distinctive Image Features from scale-invariant keypoints”. In: International Journal of Computer Vision 60.2 (2004), pp. 91–110.
[10] J. R. Janesick, T. Elliott, S. Collins, M. M. Blouke, and J. Freeman. “Scientific Charge-Coupled Devices”. In: Optical Engineering 26.8 (1987), p. 268692.

Advanced image editing

[1] C. Barnes, F.-L. Zhang, L. Lou, X. Wu, and S.-M. Hu. “PatchTable: Efficient Patch Queries for Large Datasets and Applications”. In: ACM Transactions on Graphics (2015).
[2] J. T. Barron, A. Adams, Y. Shih, and C. Hernandez. “Fast Bilateral-Space Stereo for Synthetic Defocus”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2015.
[3] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu. “Global contrast based salient region detection”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 37.3 (2015), pp. 569–582.
[4] O. Fried, E. Shechtman, D. B. Goldman, and A. Finkelstein. “Finding Distractors In Images”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2015.
[5] R. Jaroensri, S. Paris, A. Hertzmann, V. Bychkovsky, and F. Durand. “Predicting Range of Acceptable Photographic Tonal Adjustments”. In: IEEE International Conference on Computational Photography. 2015.
[6] T. Xue, M. Rubinstein, C. Liu, and W. T. Freeman. “A computational approach for obstruction-free photography”. In: ACM Transactions on Graphics 34.4 (July 2015), 79:1–79:11.
[7] C. Fang, Z. Lin, R. Mech, and X. Shen. “Automatic Image Cropping using Visual Composition, Boundary Simplicity and Content Preservation Models”. In: Proceedings of the ACM International Conference on Multimedia. New York, New York, USA: ACM Press, Nov. 2014, pp. 1105–1108.
[8] M. Son, Y. Lee, H. Kang, and S. Lee. “Art-Photographic Detail Enhancement”. In: Computer Graphics Forum 33.2 (2014), pp. 391–400.
[9] M. Wang, Y.-K. Lai, Y. Liang, R. R. Martin, and S.-M. Hu. “BiggerPicture: data-driven image extrapolation using graph matching”. In: ACM Transactions on Graphics 33.6 (Nov. 2014), pp. 1–13.
[10] K. Yamaguchi, D. Mcallester, and R. Urtasun. “Efficient Joint Segmentation, Occlusion Labeling, Stereo and Flow Estimation”. In: European Conference on Computer Vision. 2014, pp. 1–16.

[11] J. Yan, S. Lin, S. B. Kang, and X. Tang. “A Learning-to-Rank Approach for Image Color Enhancement”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2014, pp. 2987–2994.
[12] J. Baek, D. Pajak, K. Kim, K. Pulli, and M. Levoy. “WYSIWYG Computational Photography via Viewfinder Editing”. In: ACM Transactions on Graphics 32.6 (2013), 198:1–198:10.
[13] N. K. Kalantari, E. Shechtman, C. Barnes, S. Darabi, D. B. Goldman, and P. Sen. “Patch-based high dynamic range video”. In: ACM Transactions on Graphics 32.6 (2013), pp. 1–8.
[14] R. Margolin, A. Tal, and L. Zelnik-Manor. “What makes a patch distinct?” In: IEEE Conference on Computer Vision and Pattern Recognition. 2013, pp. 1139–1146.
[15] J. Yan, S. Lin, S. B. Kang, and X. Tang. “Learning the Change for Automatic Image Cropping”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2013, pp. 971–978.
[16] L. Kaufman, D. Lischinski, and M. Werman. “Content-Aware Automatic Photo Enhancement”. In: Computer Graphics Forum 31.8 (Dec. 2012), pp. 2528–2540.
[17] P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman. “Robust patch-based HDR reconstruction of dynamic scenes”. In: ACM Transactions on Graphics 31.6 (2012), p. 1.
[18] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski. “Non-rigid dense correspondence with applications for image enhancement”. In: ACM Transactions on Graphics 30.4 (2011), p. 1.
[19] B. Wang, Y. Yu, and Y.-Q. Xu. “Example-based image color and tone style enhancement”. In: ACM Transactions on Graphics 30.4 (Aug. 2011), p. 1.
[20] A. Adams, J. Baek, and M. A. Davis. “Fast high-dimensional filtering using the permutohedral lattice”. In: Computer Graphics Forum 29.2 (2010), pp. 753–762.
[21] C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein. “The generalized PatchMatch correspondence algorithm”. 2010.
[22] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. “PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing”. In: ACM Transactions on Graphics 28.3 (2009), p. 1.
[23] T. Judd, K. Ehinger, F. Durand, and A. Torralba. “Learning to predict where humans look”. In: IEEE International Conference on Computer Vision (2009), pp. 2106–2113.
[24] A. Buades, B. Coll, and J.-M. Morel. “A non-local algorithm for image denoising”. In: IEEE Conference on Computer Vision and Pattern Recognition. Vol. 2. 2005.

2D image, 3D scene

[1] K. Karsch, K. Sunkavalli, S. Hadap, N. Carr, H. Jin, R. Fonte, M. Sittig, and D. Forsyth. “Automatic Scene Inference for 3D Object Compositing”. In: ACM Transactions on Graphics 33.3 (2014), 32:1–32:15.
[2] N. Kholgade, T. Simon, A. Efros, and Y. Sheikh. “3D object manipulation in a single photograph using stock 3D models”. In: ACM Transactions on Graphics 33.4 (2014), 127:1–127:12.

[3] J.-F. Lalonde and I. Matthews. “Lighting Estimation in Outdoor Image Collections”. In: International Conference on 3D Vision. 2014.
[4] T. Chen, Z. Zhu, A. Shamir, S.-M. Hu, and D. Cohen-Or. “3-Sweep: extracting editable objects from a single photo”. In: ACM Transactions on Graphics 32.6 (2013), 195:1–195:10.
[5] J.-F. Lalonde, A. A. Efros, and S. G. Narasimhan. “Estimating the natural illumination conditions from a single outdoor image”. In: International Journal of Computer Vision 98.2 (2012), pp. 123–145.
[6] Y. Zheng, X. Chen, M.-M. Cheng, K. Zhou, S.-M. Hu, and N. J. Mitra. “Interactive images: cuboid proxies for smart image manipulation”. In: ACM Transactions on Graphics 31.4 (2012), 99:1–99:11.
[7] K. Karsch, V. Hedau, D. Forsyth, and D. Hoiem. “Rendering synthetic objects into legacy photographs”. In: ACM Transactions on Graphics 30.6 (2011), p. 1.
[8] V. Hedau, D. Hoiem, and D. Forsyth. “Recovering the spatial layout of cluttered rooms”. In: IEEE International Conference on Computer Vision. IEEE, Sept. 2009, pp. 1849–1856.
[9] A. Levin, A. Rav-Acha, and D. Lischinski. “Spectral matting”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 30.10 (2008), pp. 1699–1712.
[10] P. Debevec. “Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-based Graphics with Global Illumination and High Dynamic Range Photography”. In: Proceedings of ACM SIGGRAPH. 1998, pp. 189–198.
