This is a very soft and potentially naive question, but I've always wondered about a seemingly common phenomenon: a theorem admits one method of proof that makes it easy to establish, while other methods of proof are incredibly difficult.
For example, consider proving that every vector space has a basis (this may be a bad example). This is almost always done via an existence proof: Zorn's lemma is applied to the poset of linearly independent subsets, ordered by set inclusion. However, if one were instead to suppose there exists a vector space $V$ with no basis, it seems (to me) that deriving a contradiction from so few assumptions would be incredibly challenging.
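For reference, here is a sketch of the Zorn's lemma argument I have in mind (as I understand the standard proof):

```latex
Let $V$ be a vector space and let $\mathcal{P}$ be the set of linearly
independent subsets of $V$, partially ordered by inclusion. Given a chain
$\mathcal{C} \subseteq \mathcal{P}$, the union $\bigcup \mathcal{C}$ is an
upper bound for $\mathcal{C}$: it is linearly independent, since any finite
subset of it already lies in some single member of the chain. By Zorn's
lemma, $\mathcal{P}$ has a maximal element $B$. If some $v \in V$ were not
in $\operatorname{span}(B)$, then $B \cup \{v\}$ would be linearly
independent, contradicting the maximality of $B$. Hence $B$ spans $V$,
so $B$ is a basis.
```

Note that the contradiction here is local (against the maximality of $B$), which is quite different from assuming globally that no basis of $V$ exists.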
With that said, I had a few questions:
1. Are there any other examples of theorems like this?
2. Is this phenomenon simply due to the logical structure of the statements themselves, or is it something deeper? Is it something one can quantify in some way? That is, is there any formal way to study the structure of a statement and determine which methods of proof are well suited to it and which are not?
3. With (1) in mind, are there ever efforts to prove the same theorem by multiple methods purely for the sake of interest?