Bibliography
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C.,
Corrado, G. S., Davis, A., Dean, J., Devin, M., and others (2016),
“TensorFlow: Large-scale machine learning on heterogeneous
distributed systems,” arXiv preprint arXiv:1603.04467.
Allaire, J. J., Teague, C., Scheidegger, C., Xie, Y., and Dervieux, C.
(2024), “Quarto.” https://doi.org/10.5281/zenodo.5960048.
Belsley, D. A., Kuh, E., and Welsch, R. E. (1980), Regression
diagnostics: Identifying influential data and sources of
collinearity, John Wiley & Sons.
Box, G. E. (1976), “Science and statistics,” Journal of
the American Statistical Association, Taylor & Francis, 71,
791–799.
Breusch, T. S., and Pagan, A. R. (1979), “A simple test for
heteroscedasticity and random coefficient variation,”
Econometrica: Journal of the Econometric Society, JSTOR,
1287–1294.
Brunetti, A., Buongiorno, D., Trotta, G. F., and Bevilacqua, V. (2018),
“Computer vision and deep learning techniques for pedestrian
detection and tracking: A survey,” Neurocomputing,
Elsevier, 300, 17–33.
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D.
F., and Wickham, H. (2009a), “Statistical inference for
exploratory data analysis and model diagnostics,”
Philosophical Transactions of the Royal Society A: Mathematical,
Physical and Engineering Sciences, The Royal Society Publishing,
367, 4361–4383.
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D.
F., and Wickham, H. (2009b), “Statistical inference for
exploratory data analysis and model diagnostics,”
Philosophical Transactions of the Royal Society A: Mathematical,
Physical and Engineering Sciences, 367, 4361–4383. https://doi.org/10.1098/rsta.2009.0120.
Chang, W., and Borges Ribeiro, B. (2021), shinydashboard:
Create dashboards with ’shiny’.
Chang, W., Cheng, J., Allaire, J., Sievert, C., Schloerke, B., Xie, Y.,
Allen, J., McPherson, J., Dipert, A., and Borges, B. (2022), shiny: Web application
framework for R.
Chen, Y., Su, S., and Yang, H. (2020), “Convolutional neural
network analysis of recurrence plots for anomaly detection,”
International Journal of Bifurcation and Chaos, World
Scientific, 30, 2050002.
Cheng, J., Sievert, C., Schloerke, B., Chang, W., Xie, Y., and Allen, J.
(2024), htmltools: Tools for
HTML.
Chollet, F. (2021), Deep learning with Python, Simon & Schuster.
Chollet, F., and others (2015), “Keras,” https://keras.io.
Chopra, S., Hadsell, R., and LeCun, Y. (2005), “Learning a
similarity metric discriminatively, with application to face
verification,” in 2005 IEEE computer society conference on
computer vision and pattern recognition (CVPR’05), IEEE, pp.
539–546.
Chowdhury, N. R., Cook, D., Hofmann, H., and Majumder, M. (2018),
“Measuring lineup difficulty by matching distance metrics with
subject choices in crowd-sourced data,” Journal of
Computational and Graphical Statistics, Taylor & Francis, 27,
132–145.
Chu, H., Liao, X., Dong, P., Chen, Z., Zhao, X., and Zou, J. (2019),
“An automatic classification method of well testing plot based on
convolutional neural network (CNN),” Energies, MDPI, 12,
2846.
Clark, A., and others (2015), “Pillow (PIL fork)
documentation,” Read the Docs.
Cleveland, W. S., and McGill, R. (1984), “Graphical perception:
Theory, experimentation, and application to the development of graphical
methods,” Journal of the American Statistical
Association, Taylor & Francis, 79, 531–554.
Cook, R. D., and Weisberg, S. (1982), Residuals and influence in
regression, New York: Chapman & Hall.
Cook, R. D., and Weisberg, S. (1999), Applied regression including
computing and graphics, John Wiley & Sons.
Davies, R., Locke, S., and D’Agostino McGowan, L. (2022), datasauRus:
Datasets from the Datasaurus Dozen.
Davison, A. C., and Hinkley, D. V. (1997), Bootstrap methods and
their application, Cambridge University Press.
De Leeuw, J. R. (2015), “jsPsych: A JavaScript library for
creating behavioral experiments in a web browser,” Behavior
Research Methods, Springer, 47, 1–12.
Draper, N. R., and Smith, H. (1998), Applied regression
analysis, John Wiley & Sons.
Dunn, P. K., and Smyth, G. K. (1996), “Randomized quantile
residuals,” Journal of Computational and Graphical
Statistics, Taylor & Francis, 5, 236–244.
Emami, S., and Suciu, V. P. (2012), “Facial recognition using
OpenCV,” Journal of Mobile, Embedded and Distributed
Systems, 4, 38–43.
Farrar, T. J. (2020), skedastic: Heteroskedasticity diagnostics for
linear regression models, Bellville, South Africa: University of
the Western Cape.
Fieberg, J., Freeman, S., and Signer, J. (2024), “Using lineups to
evaluate goodness of fit of animal movement models,” Methods
in Ecology and Evolution, Wiley Online Library.
Frisch, R., and Waugh, F. V. (1933), “Partial time regressions as
compared with individual trends,” Econometrica: Journal of
the Econometric Society, JSTOR, 387–401.
Fukushima, K., and Miyake, S. (1982), “Neocognitron: A new
algorithm for pattern recognition tolerant of deformations and shifts in
position,” Pattern Recognition, Elsevier, 15, 455–469.
Gautier, L. (2024), Python
interface to the R language (embedded R).
Gebhardt, A., Bivand, R., and Sinclair, D. (2023), interp: Interpolation
methods.
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and
Rubin, D. B. (2013), Bayesian data analysis (3rd ed.),
Chapman and Hall/CRC.
Goode, K., and Rey, K. (2019), ggResidpanel:
Panels and interactive versions of diagnostic plots using
’ggplot2’.
Goodfellow, I., Bengio, Y., and Courville, A. (2016), Deep
learning, MIT Press.
Goscinski, W. J., McIntosh, P., Felzmann, U., Maksimenko, A., Hall, C.
J., Gureyev, T., Thompson, D., Janke, A., Galloway, G., Killeen, N. E.,
and others (2014), “The multi-modal Australian ScienceS imaging
and visualization environment (MASSIVE) high performance computing
infrastructure: Applications in neuroscience and neuroinformatics
research,” Frontiers in Neuroinformatics, Frontiers
Media SA, 8, 30.
Grinberg, M. (2018), Flask web development: Developing web
applications with Python, O’Reilly Media, Inc.
Hailesilassie, T. (2019), “Financial market prediction using
recurrence plot and convolutional neural network,” Preprints.
Harris, C. R., Millman, K. J., Van Der Walt, S. J., Gommers, R.,
Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith,
N. J., and others (2020), “Array programming with NumPy,”
Nature, Nature Publishing Group UK London, 585, 357–362.
Harrison Jr, D., and Rubinfeld, D. L. (1978), “Hedonic housing
prices and the demand for clean air,” Journal of
Environmental Economics and Management, Elsevier, 5, 81–102.
Hartig, F. (2022), DHARMa: Residual
diagnostics for hierarchical (multi-level / mixed) regression
models.
Hastie, T. J. (2017), “Generalized additive models,” in
Statistical models in S, Routledge, pp. 249–307.
Hatami, N., Gavet, Y., and Debayle, J. (2018a), “Classification of time-series images using deep
convolutional neural networks,” in Tenth international
conference on machine vision (ICMV 2017), eds. A. Verikas, P.
Radeva, D. Nikolaev, and J. Zhou, International Society for Optics
and Photonics, SPIE, p. 106960Y. https://doi.org/10.1117/12.2309486.
Hatami, N., Gavet, Y., and Debayle, J. (2018b), “Classification of
time-series images using deep convolutional neural networks,” in
Tenth international conference on machine vision (ICMV 2017),
SPIE, pp. 242–249.
He, K., Zhang, X., Ren, S., and Sun, J. (2016), “Deep residual
learning for image recognition,” in Proceedings of the IEEE
conference on computer vision and pattern recognition, pp. 770–778.
Hebbali, A. (2024), olsrr: Tools for
building OLS regression models.
Hermite, M. (1864), Sur un nouveau développement en
série des fonctions, Imprimerie de Gauthier-Villars.
Hester, J., and Bryan, J. (2022), glue: Interpreted string
literals.
Hofmann, H., VanderPlas, S., and Ge, Y. (2022), ggpcp: Parallel
coordinate plots in the ’ggplot2’ framework.
Hofmann, H., Wickham, H., and Kafadar, K. (2017), “Letter-value plots:
Boxplots for large data,” Journal of Computational and
Graphical Statistics, Taylor & Francis, 26, 469–477.
Hornik, K. (2012), “The comprehensive R archive network,”
Wiley Interdisciplinary Reviews: Computational Statistics,
Wiley Online Library, 4, 394–398.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. (2017),
“Densely connected convolutional networks,” in
Proceedings of the IEEE conference on computer vision and pattern
recognition, pp. 4700–4708.
Hyndman, R. J., and Fan, Y. (1996), “Sample quantiles in
statistical packages,” The American Statistician, Taylor
& Francis, 50, 361–365.
Jamshidian, M., Jennrich, R. I., and Liu, W. (2007), “A study of
partial F tests for multiple linear regression models,”
Computational Statistics & Data Analysis, Elsevier, 51,
6269–6284.
Jarque, C. M., and Bera, A. K. (1980), “Efficient tests for
normality, homoscedasticity and serial independence of regression
residuals,” Economics Letters, Elsevier, 6, 255–259.
Jeppson, H., Hofmann, H., and Cook, D. (2021), ggmosaic: Mosaic
plots in the ’ggplot2’ framework.
Johnson, P. E. (2022), rockchalk:
Regression estimation and presentation.
Kahle, D. (2013), “mpoly: Multivariate polynomials in
R,” The R Journal, 5, 162–170.
Kahneman, D. (2011), Thinking, fast and slow, Macmillan.
Kimball, A. (1957), “Errors of the third kind in statistical
consulting,” Journal of the American Statistical
Association, Taylor & Francis, 52, 133–142.
Kingma, D. P., and Ba, J. (2014), “Adam: A method for stochastic
optimization,” arXiv preprint arXiv:1412.6980.
Kirk, R. E. (1996), “Practical significance: A concept whose time
has come,” Educational and Psychological Measurement,
Sage Publications, 56, 746–759.
Krishnan, G., and Hofmann, H. (2021), “Hierarchical decision
ensembles – an inferential framework for uncertain human-AI collaboration
in forensic examinations,” arXiv preprint
arXiv:2111.01131.
Kuhn, M., Vaughan, D., and Hvitfeldt, E. (2024), yardstick: Tidy
characterizations of model performance.
Kullback, S., and Leibler, R. A. (1951), “On information and
sufficiency,” The Annals of Mathematical Statistics,
JSTOR, 22, 79–86.
Langsrud, Ø. (2005), “Rotation tests,” Statistics and
Computing, Springer, 15, 53–60.
Laplace, P.-S. (1820), Théorie analytique des
probabilités, Courcier.
Lee, H., and Chen, Y.-P. P. (2015), “Image based computer aided
diagnosis system for cancer detection,” Expert Systems with
Applications, Elsevier, 42, 5356–5365.
Li, W. (2024), “bandicoot:
Light-weight Python-like object-oriented system.”
Li, W., Cook, D., Tanaka, E., and VanderPlas, S. (2024), “A plot
is worth a thousand tests: Assessing residual diagnostics with the
lineup protocol,” Journal of Computational and Graphical
Statistics, Taylor & Francis, 1–19.
Long, J. A. (2022), jtools: Analysis and
presentation of social scientific data.
Loy, A. (2021), “Bringing visual inference to the
classroom,” Journal of Statistics and Data Science
Education, Taylor & Francis, 29, 171–182.
Loy, A., Follett, L., and Hofmann, H. (2016), “Variations of Q–Q
plots: The power of our eyes!” The American
Statistician, Taylor & Francis, 70, 202–214.
Loy, A., and Hofmann, H. (2013), “Diagnostic tools for
hierarchical linear models,” Wiley Interdisciplinary Reviews:
Computational Statistics, Wiley Online Library, 5, 48–61.
Loy, A., and Hofmann, H. (2014), “HLMdiag: A suite of diagnostics
for hierarchical linear models in R,” Journal of Statistical
Software, 56, 1–28.
Loy, A., and Hofmann, H. (2015), “Are you normal? The problem of
confounded residual structures in hierarchical linear models,”
Journal of Computational and Graphical Statistics, Taylor &
Francis, 24, 1191–1209.
Majumder, M., Hofmann, H., and Cook, D. (2013a), “Validation of
visual statistical inference, applied to linear models,”
Journal of the American Statistical Association, Taylor &
Francis, 108, 942–956.
Majumder, M., Hofmann, H., and Cook, D. (2013b), “Validation of
visual statistical inference, applied to linear models,”
Journal of the American Statistical Association, 108, 942–956.
https://doi.org/10.1080/01621459.2013.808157.
Mason, H., Lee, S., Laa, U., and Cook, D. (2022), cassowaryr: Compute
scagnostics on pairs of numeric variables in a data set.
Montgomery, D. C., Peck, E. A., and Vining, G. G. (1982),
Introduction to linear regression analysis, John Wiley &
Sons.
Moon, K.-W. (2020), webr: Data and functions
for web-based analysis.
Müller, K. (2020), here: A simpler way to
find your files.
Nair, V., and Hinton, G. E. (2010), “Rectified linear units
improve restricted Boltzmann machines,” in Proceedings of the
27th international conference on machine learning (ICML-10), pp.
807–814.
Nowosad, J. (2018), ’CARTOColors’
palettes.
O’Malley, T., Bursztein, E., Long, J., Chollet, F., Jin, H., Invernizzi,
L., and others (2019), “Keras Tuner,” https://github.com/keras-team/keras-tuner.
Ojeda, S. A. A., Solano, G. A., and Peramo, E. C. (2020),
“Multivariate time series imaging for short-term precipitation
forecasting using convolutional neural networks,” in 2020
international conference on artificial intelligence in information and
communication (ICAIIC), IEEE, pp. 33–38.
Olvera Astivia, O. L., Gadermann, A., and Guhn, M. (2019), “The
relationship between statistical power and predictor distribution in
multilevel logistic regression: A simulation-based approach,”
BMC Medical Research Methodology, BioMed Central, 19, 1–20.
Ooms, J. (2023), magick: Advanced
graphics and image-processing in R.
Palan, S., and Schitter, C. (2018), “Prolific.ac—a subject pool
for online experiments,” Journal of Behavioral and
Experimental Finance, Elsevier, 17, 22–27.
Pedersen, T. L. (2022), patchwork: The
composer of plots.
PythonAnywhere LLP (2023), “PythonAnywhere.”
R Core Team (2022), R: A
language and environment for statistical computing, Vienna,
Austria: R Foundation for Statistical Computing.
Ramsey, J. B. (1969), “Tests for specification errors in classical
linear least-squares regression analysis,” Journal of the
Royal Statistical Society: Series B (Methodological), Wiley Online
Library, 31, 350–371.
Rawat, W., and Wang, Z. (2017), “Deep convolutional neural
networks for image classification: A comprehensive review,”
Neural Computation, MIT Press, 29, 2352–2449.
Reinhart, A. (2024), regressinator:
Simulate and diagnose (generalized) linear models.
Rowlingson, B., and Diggle, P. (2023), splancs: Spatial and
space-time point pattern analysis.
Roy Chowdhury, N., Cook, D., Hofmann, H., Majumder, M., Lee, E.-K., and
Toth, A. L. (2015), “Using visual statistical inference to better
understand random class separations in high dimension, low sample size
data,” Computational Statistics, Springer, 30, 293–316.
https://doi.org/10.1007/s00180-014-0534-x.
Sali, A., and Attali, D. (2020), shinycssloaders:
Add loading animations to a ’shiny’ output while it’s
recalculating.
Savvides, R., Henelius, A., Oikarinen, E., and Puolamäki, K. (2023),
“Visual data exploration as a statistical testing procedure:
Within-view and between-view multiple comparisons,” IEEE
Transactions on Visualization and Computer Graphics, 29, 3937–3948.
https://doi.org/10.1109/TVCG.2022.3175532.
Series, B. (2011), “Studio encoding parameters of digital
television for standard 4:3 and wide-screen 16:9 aspect
ratios,” International Telecommunication Union,
Radiocommunication Sector.
Shapiro, S. S., and Wilk, M. B. (1965), “An analysis of variance
test for normality (complete samples),” Biometrika,
JSTOR, 52, 591–611.
Silverman, B. W. (2018), Density estimation for statistics and data
analysis, Routledge.
Silvey, S. D. (1959), “The Lagrangian multiplier test,”
The Annals of Mathematical Statistics, JSTOR, 30, 389–407.
Simonyan, K., and Zisserman, A. (2014), “Very deep convolutional
networks for large-scale image recognition,” arXiv preprint
arXiv:1409.1556.
Singh, K., Gupta, G., Vig, L., Shroff, G., and Agarwal, P. (2017),
“Deep convolutional neural networks for pairwise
causality,” arXiv preprint arXiv:1701.00597.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and
Salakhutdinov, R. (2014), “Dropout: A simple way to prevent neural
networks from overfitting,” The Journal of Machine Learning
Research, JMLR.org, 15, 1929–1958.
Tukey, J. W., and Tukey, P. A. (1985), “Computer graphics and
exploratory data analysis: An introduction,” in Proceedings
of the sixth annual conference and exposition: Computer graphics,
pp. 773–785.
Ushey, K., Allaire, J., and Tang, Y. (2024), reticulate:
Interface to ’Python’.
VanderPlas, S., and Hofmann, H. (2016), “Spatial reasoning and
data displays,” IEEE Transactions on Visualization and
Computer Graphics, 22, 459–468. https://doi.org/10.1109/TVCG.2015.2469125.
VanderPlas, S., Röttger, C., Cook, D., and Hofmann, H. (2021),
“Statistical significance calculations for scenarios in visual
inference,” Stat, Wiley Online Library, 10, e337.
Vo, N. N., and Hays, J. (2016), “Localizing and orienting street
views using overhead imagery,” in Computer Vision – ECCV 2016:
14th European Conference, Amsterdam, The Netherlands, October 11–14,
2016, Proceedings, Part I, Springer, pp. 494–509.
Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. (2004),
“Image quality assessment: From error visibility to structural
similarity,” IEEE Transactions on Image Processing,
IEEE, 13, 600–612.
Warton, D. I. (2023), “Global simulation envelopes for diagnostic
plots in regression models,” The American Statistician,
77, 425–431. https://doi.org/10.1080/00031305.2022.2139294.
White, H. (1980), “A heteroskedasticity-consistent covariance
matrix estimator and a direct test for heteroskedasticity,”
Econometrica: Journal of the Econometric Society, JSTOR,
817–838.
Wickham, H. (2016), ggplot2:
Elegant graphics for data analysis, Springer-Verlag New York.
Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D.,
François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M.,
Pedersen, T. L., Miller, E., Bache, S. M., Müller, K., Ooms, J.,
Robinson, D., Seidel, D. P., Spinu, V., Takahashi, K., Vaughan, D.,
Wilke, C., Woo, K., and Yutani, H. (2019), “Welcome to the tidyverse,” Journal of Open Source
Software, 4, 1686. https://doi.org/10.21105/joss.01686.
Wickham, H., Chowdhury, N. R., Cook, D., and Hofmann, H. (2020), nullabor: Tools for
graphical inference.
Wickham, H., Cook, D., Hofmann, H., and Buja, A. (2010),
“Graphical inference for infovis,” IEEE Transactions on
Visualization and Computer Graphics, 16, 973–979. https://doi.org/10.1109/TVCG.2010.161.
Widen, H. M., Elsner, J. B., Pau, S., and Uejio, C. K. (2016),
“Graphical inference in geographical research,”
Geographical Analysis, Wiley Online Library, 48, 115–131.
Wilkinson, L., Anand, A., and Grossman, R. (2005),
“Graph-theoretic scagnostics,” in IEEE Symposium on Information
Visualization, IEEE Computer Society, pp. 21–21.
Xie, Y. (2014), “knitr: A
comprehensive tool for reproducible research in
R,” in Implementing reproducible
computational research, eds. V. Stodden, F. Leisch, and R. D. Peng,
Chapman & Hall/CRC.
Xie, Y., Cheng, J., and Tan, X. (2024), DT: A wrapper of the
JavaScript library ’DataTables’.
Yin, T., Majumder, M., Roy Chowdhury, N., Cook, D., Shoemaker, R., and
Graham, M. (2013), “Visual mining methods for RNA-seq data: Data
structure, dispersion estimation and significance testing,”
Journal of Data Mining in Genomics and Proteomics, 4. https://doi.org/10.4172/2153-0602.1000139.
Zakai, A. (2011), “Emscripten: An LLVM-to-JavaScript
compiler,” in Proceedings of the ACM international conference
companion on object oriented programming systems languages and
applications companion, pp. 301–312.
Zeileis, A., and Hothorn, T. (2002), “Diagnostic checking in
regression relationships,” R News, 2, 7–10.
Zhang, Y., Hou, Y., Zhou, S., and Ouyang, K. (2020), “Encoding
time series as multi-scale signed recurrence plots for classification
using fully convolutional networks,” Sensors, MDPI, 20,
3818.
Zhang, Z., and Yuan, K.-H. (2018), Practical statistical power
analysis using WebPower and R, ISDSA Press.
Zhu, H. (2021), kableExtra:
Construct complex table with kable and pipe syntax.