Cache coherency is important, but it's not the be-all and end-all.
It is in the general case impossible to arrange for your code to provide maximally-coherent access to all the necessary data for everything that needs it. You can do it for small, simple projects, but as things get complex you start to bump up against hard limits: a cache line is only so big, and only so many independent modules are ever going to need to read the same set of data in the same way.
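For a sense of scale, here's a small C++ sketch (the struct is made up for illustration, and the C++17 interference-size constant needs a fairly recent compiler):

```cpp
#include <cstddef>
#include <new>  // std::hardware_constructive_interference_size (C++17)

// A modest transform record: 40 bytes, no padding on mainstream targets.
struct Transform {
    float position[3];  // 12 bytes
    float rotation[4];  // 16 bytes (quaternion)
    float scale[3];     // 12 bytes
};
static_assert(sizeof(Transform) == 40, "unexpected padding");

// With a typical 64-byte line, not even two whole Transforms fit per line.
constexpr std::size_t kLineSize = std::hardware_constructive_interference_size;
constexpr std::size_t kTransformsPerLine = kLineSize / sizeof(Transform);  // 1
```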
Unless you do tricks like duplicating data (or, in extreme cases, compressing it so more fits into the cacheable range), you can't do it. And once you start duplicating and compressing data, that starts to cancel out some of your coherency gains... you're adding complexity for a performance win that may come out in the wash.
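To make the "compressing" idea concrete, here's a rough sketch (all names hypothetical) that quantizes positions into 16-bit offsets from a shared chunk origin, halving the per-position footprint at the cost of encode/decode work and a second representation to keep in sync:

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical fixed-point scheme: 1 unit = 1/64 m, so offsets must stay
// within roughly +/-512 m of the chunk origin (int16_t range / 64).
struct ChunkOrigin { float x, y, z; };
struct PackedPosition { int16_t dx, dy, dz; };  // 6 bytes vs. 12 for floats

constexpr float kScale = 64.0f;

inline PackedPosition pack(const ChunkOrigin& o, float x, float y, float z) {
    return { static_cast<int16_t>(std::lround((x - o.x) * kScale)),
             static_cast<int16_t>(std::lround((y - o.y) * kScale)),
             static_cast<int16_t>(std::lround((z - o.z) * kScale)) };
}

inline void unpack(const ChunkOrigin& o, const PackedPosition& p,
                   float& x, float& y, float& z) {
    x = o.x + p.dx / kScale;
    y = o.y + p.dy / kScale;
    z = o.z + p.dz / kScale;
}
```

The pack/unpack calls are exactly the kind of added complexity that can eat the win.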
You're generally only going to be able to make a subset of your update processes "perfectly" cache-coherent. You should profile to determine which ones are the most likely to benefit from doing so, based on your game's actual access patterns and distribution of objects, and arrange the data accordingly. Everything else will have to settle for being less-than-ideally cache-coherent, or simply not that coherent at all.
Optimizing for cache coherency is mostly about focusing on the actual data access patterns, which is often at odds with dogmatic adherence to a classification scheme (whether "traditional OO" or "ECS").
Since position is generally small, it's a reasonable candidate for (effective) duplication: you could consider an approach where the models and particle systems have their own local position, which is an offset from the real position. You can process the arrays of models and arrays of particle systems in that local space without having to pull in a shared position from elsewhere until a later stage.
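A rough sketch of what that might look like (names are invented, and it assumes the render-side arrays run parallel to an array of owner world positions):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical render-side store: particle systems keep their own local
// positions (offsets from the owner's "real" position) in packed arrays.
struct ParticleSystems {
    std::vector<Vec3> localPos;  // offset from the owning object's position
    std::vector<Vec3> velocity;

    // Hot loop: runs entirely in local space over contiguous arrays,
    // never touching the shared simulation data.
    void integrate(float dt) {
        for (std::size_t i = 0; i < localPos.size(); ++i) {
            localPos[i].x += velocity[i].x * dt;
            localPos[i].y += velocity[i].y * dt;
            localPos[i].z += velocity[i].z * dt;
        }
    }

    // Later stage: pull in the shared world positions once, just before
    // handing the data to the renderer.
    void resolveWorld(const std::vector<Vec3>& ownerPos,
                      std::vector<Vec3>& outWorld) const {
        outWorld.resize(localPos.size());
        for (std::size_t i = 0; i < localPos.size(); ++i) {
            outWorld[i].x = ownerPos[i].x + localPos[i].x;
            outWorld[i].y = ownerPos[i].y + localPos[i].y;
            outWorld[i].z = ownerPos[i].z + localPos[i].z;
        }
    }
};
```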
But at a higher level I'd consider simply not worrying about the cache impact too much here. Yet. Both of the systems you're worried about are render-related, so "all of them in one big array" is perhaps not the best storage mechanism anyway. You're going to want to do a lot of culling and other kinds of visibility rejection on these objects, and the results of that rejection may shuffle the interesting set of objects around quite a bit, unrelated to their original positions in the array.
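To illustrate that shuffling, the usual pattern looks something like this (just a sketch; a simple distance cull stands in for a real frustum test):

```cpp
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

// Stand-in visibility test; a real renderer would test the camera frustum.
inline bool isVisible(const Sphere& s, const Vec3& eye, float maxDist) {
    const float dx = s.center.x - eye.x;
    const float dy = s.center.y - eye.y;
    const float dz = s.center.z - eye.z;
    const float reach = maxDist + s.radius;
    return dx * dx + dy * dy + dz * dz <= reach * reach;
}

// Per-frame culling pass: the "interesting" set is whatever survives, and
// its size and order can differ wildly from the source array's layout.
std::vector<uint32_t> cullToVisible(const std::vector<Sphere>& bounds,
                                    const Vec3& eye, float maxDist) {
    std::vector<uint32_t> visible;
    visible.reserve(bounds.size());
    for (uint32_t i = 0; i < static_cast<uint32_t>(bounds.size()); ++i)
        if (isVisible(bounds[i], eye, maxDist))
            visible.push_back(i);
    return visible;  // downstream stages iterate this, not the full array
}
```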
Similarly, you're going to want to group them for efficient use of actual rendering resources (avoiding state switches, sharing resources where possible, et cetera), which may also imply a different data structure than an array. And you note that you're not even sure what you're doing about "textures and shaders" for each model, which suggests you're not feature-complete there.
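One common shape for that grouping (again just a sketch, not a prescription): pack the expensive-to-switch state into a sort key and sort the frame's draw list by it, so consecutive draws share state:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw-call record: the costly-to-switch state lives in the
// high bits of the key, so one sort groups compatible draws together.
struct DrawItem {
    uint64_t key;     // [shader id | texture id | depth bits]
    uint32_t object;  // index back into the visible set
};

inline uint64_t makeKey(uint16_t shaderId, uint16_t textureId,
                        uint32_t depthBits) {
    return (static_cast<uint64_t>(shaderId) << 48) |
           (static_cast<uint64_t>(textureId) << 32) |
           static_cast<uint64_t>(depthBits);
}

inline void sortForSubmission(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.key < b.key; });
    // Submission walks the list in order; shader/texture binds happen only
    // when the corresponding key bits change.
}
```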
Certainly there is cause for concern about having to do major refactoring after the fact if you really muck up the design of a system. But I would not let that prevent you from first making the system work, with all the features you want. Once it works and does everything you want, then you can make it fast, because you'll have a better idea of what your requirements and constraints actually are.