trialr is my cookbook of Bayesian clinical trial designs implemented in Stan. It is small right now, but I plan to grow it over time. I presented trialr at the International Society for Clinical Biostatistics (ISCB) Conference last week. I hope that by cataloguing trial designs I will increase the use of Stan, and of Bayesian statistics in general, in clinical trials.

I love Stan and I make frequent use of rstantools, so thanks for publishing that.

Kristian

Well, anyway, I love reading model descriptions. And I love the ideas behind bootstrapping or robustifying or whatever you call adding data so the process might, maybe should, pick something other than your points as mapped. I know you’ve talked about parameterization a lot. It is really interesting: how do you know your assumptive space is actually set out properly, that the points you’re using are not biased – like a mentalist’s illusions can appear to come out of the blue when there is absolute clarity not visible to you (another reduction-of-the-unknown statement) – or that you’ve biased the algorithm? We always bias the algorithm, because any selection is a choice, but the closer one gets to determining whether a specific effect is real, the more important that bias becomes, which is true not only for small effects but for large ones as well.

Somewhat off track but interesting: for a very large effect, consider the difference between predators and non-predators. There is a binary-type switch which identifies ‘that which is not growing out of the earth’, up through things like ‘those that move around unattached’ to ‘those not of your kin’, so this group of equations generates the answer ‘eat that’. You can point out that eating goes down to bacterial levels, and you can even say molecules eat just as black holes eat, but that only makes the point that the binary relation of ‘eat’ and ‘not eat’ extends through creation. Note that mathematically this is merely a statement that the set of equations which generates ‘eat’ must be accompanied by a set of equations that generates ‘not eat’, because there must be a context in which you’re counting at least one or the other.
That result can be achieved by diagonalizing too, but again it spirals into much more complicated stuff, including how contexts count in coordinate planes across an implied Riemann-style zeta axis, in which you can see the complexity develop as you count ‘distance’ – all the way to the stuff that really interests me these days, which is the issue of directionality in layered contexts within layered contexts. This involves a lot of rotations with tricky perspectives: for example, the distortions that occur as you hold a perspective line or as you generate one, how that idealizes at each implied counting point – meaning scale determinability – and then how you can compare these forms with idealizations so you can rip stuff apart and calculate and comprehend within rational schemata like graphs and algorithmic steps. I find manipulating the idealizations extremely taxing, even using little pictures to keep track; the complexity is like figuring out a j potential without knowing that the answer is what you observe as j.

Enough for today. I’m trying to state a simple concept today, something like vitality – which I’m arguing in my head is a rational proxy for energy contained, so I can use myself as the model for the groups of equations that do or don’t generate my vitality level (as I appreciate it, as I appreciate all the many layered orders of what contributes to my working definition of vitality). The point is to see if I can do some pure Bayesian thinking: can I create a prior which generates a better posterior, which then creates a better prior, more or less, for the next iterations, when the first prior in that chain is generated by a model separate from the ‘energy contained’ label vitality? That is, I can state this as evolution being a count of results measured in Darwin’s context, meaning simply that the ‘target’ appears within a model that has no target but which generates targets at each layer, as these layers reduce to layers that count within contexts. These can be immaterial, such as the flutter of a butterfly, or material, like the need for sustenance, and they rise to existential levels, whether that’s pure relative context – as in, you mean nothing to me! – or a momentary statement or expression of existence – like a pulsar or supernova across a million light years, meaning space-time ‘away’ or ‘near’ – or the processes that matter in that context – like ‘Mongo just pawn in great game of life’. I can state this as a field, in various existence and process representations, and I have lots of examples of what it explains, but I get really stuck on a bunch of applications. I’m having trouble visualizing certain spatial transformations, such as ‘when an x-y plane generates, how many model layers are visible in that plane in the ideal’.
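The prior-to-posterior-to-prior chain described above can be sketched concretely. This is a minimal illustration, not the author’s actual model: I assume a conjugate Beta-Binomial stand-in for the ‘vitality’ quantity, and the batch data and true rate are made up, so the update at each step is exact and the posterior literally becomes the next iteration’s prior.

```python
# Sketch of a prior -> posterior -> prior chain, using a conjugate
# Beta-Binomial model (a hypothetical stand-in for 'vitality').
# The first prior is flat, i.e. generated outside the data model.
import random

def update(prior_a, prior_b, successes, trials):
    """Conjugate Beta-Binomial update: the posterior becomes the next prior."""
    return prior_a + successes, prior_b + (trials - successes)

random.seed(1)
true_rate = 0.7          # unknown rate the chain should home in on (made up)
a, b = 1.0, 1.0          # first prior: flat Beta(1, 1)

for step in range(5):
    trials = 20
    # simulate one batch of binary observations
    successes = sum(random.random() < true_rate for _ in range(trials))
    a, b = update(a, b, successes, trials)       # posterior for this batch...
    mean = a / (a + b)                           # ...carried forward as the next prior
    print(f"iteration {step}: posterior mean = {mean:.3f}")
```

Each pass through the loop starts from the previous posterior, so later priors concentrate around the data-generating rate; that is the ‘better prior from a better posterior’ loop, stated in the smallest model that supports it.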
I know how that converts into a value in a count that shifts across the field – meaning across the unknown, meaning, for someone who’s really into deep mathematics, the gap over any count in the real line – but I have trouble with the overlapping squares and circles in the ideal drawings, let alone when the field consists of multiple iterations of these layered fields, because it’s super hard to see into the gaps and valleys – into the near field – while still seeing the terrain that encompasses the larger field. This is true even when I assume the larger field, which extends through diagonalized incompleteness, has the exact same contour, and that – almost wrote THAT – is why I’m working on directionality, and in particular its relationship to inherent attractive and repulsive model forces. It’s the most fun I’ve had in my life, which has been pretty long: I’m actually considering the ‘model values’ of relative groups in an unspecified field with known, describable processes, all fully idealized and labeled for manipulation, where ‘model values’ literally takes on meaning. In other words, an investigation of how meaning occurs, from the iota to the fully comprehensive, and how these count as x-y coordinate planes along a z axis, and – much cooler but much harder – how each coordinate plane’s count is treated as made up of the counts of these coordinate planes. Out of simple depictions using an idealized axis in which z is not visible, you see how meaning occurs when there was none, how it exists over time, how it moves in relation to other ‘meanings’, and how we can talk about ‘when’. Where I’m at right now is difficulty comprehending – and I mean that in the set-theoretical sense: I have to create a comprehensible set – directional valuations at the peripheries of any given context into that context’s unknown.
I’m very hard on myself and find it difficult to accept my conclusions – again, ‘so what am I not accepting?’ and ‘what am I accepting that I don’t see?’, which you recognize is the same form of statement methodology as I’ve been using – though I can fully state them. You can see, I hope, why working with a reduction concept is just a way of declaring a variable and running it through various declared functions to see which simulations generate a better result, which is then taken as the new variable – seen also as a vector of matrices or as an ordered array, etc. – and how this process generates separate complex iterations that can be compared in a number of ways, including by a set composed of ‘best’ (which, because it is a set of fields, is itself relative to its own existence, meaning that even at the level of ‘set of best’, relative change will affect ordering and existence in the counting of ‘set of best’). Same field-of-fields stuff. Same complexities. I know the answer, but saying it at that level of expression is tricky.
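The declare-a-variable, run-it-through-declared-functions, keep-the-best loop can be sketched as a tiny generate-and-select iteration. This is only an illustration of the form: the candidate functions and the ‘better result’ criterion (closeness to a target of 10) are hypothetical choices, not anything from the comment.

```python
# Sketch: declare a variable, run it through several declared functions,
# keep the simulation that scores best, and take that result as the new
# variable for the next iteration. Functions and criterion are made up.

def score(x):
    """Hypothetical 'better result' criterion: closeness to a target of 10."""
    return -abs(x - 10)

candidate_functions = [
    lambda x: x + 1.0,    # small step up
    lambda x: x * 1.5,    # multiplicative growth
    lambda x: x - 0.5,    # small step down
]

x = 1.0                   # the declared variable
history = [x]             # the ordered array of accepted values
for _ in range(8):
    results = [f(x) for f in candidate_functions]  # run all declared functions
    x = max(results, key=score)                    # 'set of best' reduced to one element
    history.append(x)

print(history)
```

The whole run is itself one ‘iteration’ in the comment’s sense: comparing several such histories (different function sets, different criteria) is the field-of-fields step, where the ranking of the ‘set of best’ can shift as the criterion shifts.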
