Hierarchical Bayesian Modeling of Decision-Making Tasks
Fit an array of decision-making tasks with computational models in a hierarchical Bayesian framework. hBayesDM can perform hierarchical Bayesian analysis of various computational models with a single line of code (a usage sketch follows the list below). Bolded tasks, each followed by its models and the corresponding function names, are itemized below.
**2-Armed Bandit Task**
- Rescorla-Wagner (delta) — bandit2arm_delta

**4-Armed Bandit Task**
- Fictive updating + reward/punishment sensitivity (Rescorla-Wagner (delta)) — bandit4arm_4par
- Fictive updating + reward/punishment sensitivity + lapse (Rescorla-Wagner (delta)) — bandit4arm_lapse

**4-Armed Bandit Task (modified)**
- Kalman filter — bandit4arm2_kalman_filter

**Cambridge Gambling Task**
- Cumulative Model — cgt_cm

**Choice Reaction Time Task**
- Drift Diffusion Model — choiceRT_ddm
- Drift Diffusion Model for a single subject — choiceRT_ddm_single
- Linear Ballistic Accumulator (LBA) model — choiceRT_lba
- Linear Ballistic Accumulator (LBA) model for a single subject — choiceRT_lba_single

**Choice under Risk and Ambiguity Task**
- Exponential model — cra_exp
- Linear model — cra_linear

**Description-Based Decision Making Task**
- Probability weight function — dbdm_prob_weight

**Delay Discounting Task**
- Constant Sensitivity — dd_cs
- Constant Sensitivity for a single subject — dd_cs_single
- Exponential — dd_exp
- Hyperbolic — dd_hyperbolic
- Hyperbolic for a single subject — dd_hyperbolic_single

**Orthogonalized Go/NoGo Task**
- RW + Noise — gng_m1
- RW + Noise + Bias — gng_m2
- RW + Noise + Bias + Pavlovian Bias — gng_m3
- RW (modified) + Noise + Bias + Pavlovian Bias — gng_m4

**Iowa Gambling Task**
- Outcome-Representation Learning — igt_orl
- Prospect Valence Learning (PVL) Decay-RI — igt_pvl_decay
- Prospect Valence Learning (PVL) Delta — igt_pvl_delta
- Value-Plus-Perseverance — igt_vpp

**Peer Influence Task**
- Other-Conferred Utility (OCU) model — peer_ocu

**Probabilistic Reversal Learning Task**
- Experience-Weighted Attraction — prl_ewa
- Fictitious Update — prl_fictitious
- Fictitious Update without alpha (indecision point) — prl_fictitious_woa
- Fictitious Update with multiple blocks per subject — prl_fictitious_multipleB
- Reward-Punishment — prl_rp
- Reward-Punishment with multiple blocks per subject — prl_rp_multipleB
- Fictitious Update with separate learning rates for reward and punishment — prl_fictitious_rp
- Fictitious Update with separate learning rates for reward and punishment, without alpha (indecision point) — prl_fictitious_rp_woa

**Probabilistic Selection Task**
- Q-learning with two learning rates — pst_gainloss_Q

**Risk Aversion Task**
- Prospect Theory (PT) — ra_prospect
- PT without a loss aversion parameter — ra_noLA
- PT without a risk aversion parameter — ra_noRA

**Risky Decision Task**
- Happiness model — rdt_happiness

**Two-Step Task**
- Full model (7 parameters) — ts_par7
- 6-parameter model (without eligibility trace, lambda) — ts_par6
- 4-parameter model — ts_par4

**Ultimatum Game**
- Ideal Bayesian Observer — ug_bayes
- Rescorla-Wagner (delta) — ug_delta
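As an illustration of the single-call workflow, here is a minimal R sketch that fits the Rescorla-Wagner (delta) model to 2-armed bandit data using the example dataset bundled with the package. The sampler settings shown (niter, nwarmup, nchain, ncore) are illustrative choices, not recommendations:

```r
library(hBayesDM)

# Fit the Rescorla-Wagner (delta) model to the 2-armed bandit task.
# data = "example" loads the example dataset shipped with the package;
# pass a path to a tab-delimited text file to fit your own data.
output <- bandit2arm_delta(
  data    = "example",
  niter   = 2000,   # total MCMC iterations per chain (illustrative value)
  nwarmup = 1000,   # warmup iterations discarded from each chain
  nchain  = 4,      # number of Markov chains
  ncore   = 4       # CPU cores used to run chains in parallel
)

# Visualize group-level posterior distributions and check convergence.
plot(output, type = "dist")  # posterior densities of group-level parameters
rhat(output)                 # Rhat statistics; values near 1 indicate convergence

# Individual-level parameter estimates (posterior means, one row per subject).
output$allIndPars
```

Every model function in the list above follows the same pattern, so swapping bandit2arm_delta for, say, gng_m1 fits a different task/model pair with the same one-line call; competing models fit to the same data can then be compared with printFit(), which reports model-fit indices such as LOOIC.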
Developers:
- Woo-Young Ahn (wahn55@snu.ac.kr)
- Nathaniel Haines (haines.175@osu.edu)
- Lei Zhang (bnuzhanglei2008@gmail.com)
Please cite as: Ahn, W.-Y., Haines, N., & Zhang, L. (2017). Revealing neuro-computational mechanisms of reinforcement learning and decision-making with the hBayesDM package. Computational Psychiatry, 1, 24–57. https://doi.org/10.1162/CPSY_a_00002
For tutorials and further readings, visit http://rpubs.com/CCSL/hBayesDM.