The problem that immediately comes to mind with this type of distribution is trusting that the source of your assets remains uncompromised, because in all likelihood you do not own the CDNs on which you rely. This is even more critical for financial institutions, where security (rather than speed) is the paramount concern. A more sophisticated attack could, via DNS poisoning, trick a user into downloading content from a compromised server; the content owner would remain completely oblivious to an attack that injects content indirectly. And no, hiding behind TLS would not help against this type of attack.
What can we do?
To account for the scenarios described here, the W3C has defined a validation scheme called Subresource Integrity (SRI), which adds the integrity attribute to the script and link HTML tags. Here is an example:
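A minimal sketch of what this looks like (the CDN URL is hypothetical, and the hash is truncated for illustration):

```html
<script src="https://cdn.example.com/app.js"
        integrity="sha384-Li9vy3DqF…"
        crossorigin="anonymous"></script>
```

Note the crossorigin attribute: for assets served from another origin, the browser needs a CORS-enabled request in order to read the response and verify it against the hash.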
The integrity attribute is composed of two values: “sha384”, which names the hashing algorithm applied to the file (src), followed by the calculated hash itself, “Li9vy3DqF…”. You would typically calculate and embed these values during your build process. It is then up to each browser/user agent to verify them during asset retrieval: by applying the same hashing algorithm, the browser should produce an identical hash. This provides a statistically reliable way of guaranteeing that no changes have occurred, however small (remember, a small change to a hash's input radically changes its output).
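One way to calculate such a value in a build step is with openssl (a sketch; the file app.js is a stand-in for a real build artifact):

```shell
# Stand-in asset; in practice this would be a real build output
printf 'console.log("hello");' > app.js

# -binary emits the raw digest bytes; base64 -A encodes them on a single line
hash=$(openssl dgst -sha384 -binary app.js | openssl base64 -A)

# The value that goes in the integrity attribute
echo "sha384-$hash"
```

A SHA-384 digest is 48 bytes, so the base64-encoded value is always 64 characters with no padding.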
So if the browser does not trust the source (the hash does not match), it is not obligated to execute the response, and an error is raised. The developer can handle that error by doing nothing or by falling back to a trusted source (one they directly control). I think this is a powerful addition to our ever-expanding security toolbox.
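The fallback option could be sketched like this, with hypothetical URLs; the handler is defined before the CDN script tag so it already exists when the error event fires:

```html
<script>
  // Hypothetical fallback: load a first-party copy if the CDN copy
  // fails the integrity check (or fails to load at all)
  function loadFallback() {
    var s = document.createElement('script');
    s.src = '/assets/app.js'; // a copy on a server we directly control
    document.head.appendChild(s);
  }
</script>
<script src="https://cdn.example.com/app.js"
        integrity="sha384-Li9vy3DqF…"
        crossorigin="anonymous"
        onerror="loadFallback()"></script>
```

The first-party copy should itself be one whose integrity you can vouch for, since it bypasses the SRI check the CDN copy just failed.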