This paper considers the problem of secure parameter estimation when an estimation algorithm is prone to causative attacks. Causative attacks, in general, target decision-making algorithms (e.g., inference or learning algorithms) in order to alter their decisions in specific scenarios (e.g., distorting parameter estimates for specific ranges of the parameter of interest). Such attacks influence decisions by tampering with the mechanisms through which an algorithm acquires the statistical model of the population about which it aims to form a decision. They can be mounted, for instance, by contaminating the historical or training data, or by compromising an expert who provides the statistical model. In the presence of causative attacks, inference algorithms operate under a distorted statistical model for the data samples. This paper introduces a notion of secure parameter estimation and formalizes a framework under which secure estimation can be formulated and analyzed. The central premise underlying this framework is that forming secure estimates introduces a new dimension to the estimation objective, pertaining to detecting attacks and isolating the true model. Since the detection and isolation decisions are themselves imperfect, their inclusion induces an inherent coupling between the desired secure estimation objective and the auxiliary detection and isolation decisions that must be formed in conjunction with the estimates. This paper establishes the fundamental interplay among these decisions and characterizes the general decision rules in closed form for any desired estimation cost function. Furthermore, to circumvent the computational complexity associated with growing parameter dimension or attack complexity, a scalable estimation algorithm is provided and shown to enjoy certain optimality guarantees. Finally, the theory developed is applied to secure parameter estimation in sensor networks.
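The detect-isolate-estimate coupling described above can be illustrated with a deliberately simple toy sketch (not the paper's algorithm; the contamination model, the robust z-score detector, and all thresholds below are assumptions chosen for illustration): an attacker injects corrupted samples into the data from which the model is learned, and the estimator must first detect the attack, isolate the presumed-clean samples, and only then form its estimate, so that any detection error propagates into the estimation error.

```python
import numpy as np

def secure_mean_estimate(x, z_thresh=3.0):
    """Toy secure estimation sketch (illustrative assumption, not the
    paper's method): (1) DETECT suspect samples via a robust z-score,
    (2) ISOLATE the presumed-clean subset, and (3) ESTIMATE the
    parameter (here, a scalar mean) from that subset.  Imperfect
    detection feeds directly into the estimate, illustrating the
    coupling between the auxiliary decisions and the estimation goal."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826  # robust scale estimate
    z = np.abs(x - med) / mad                  # robust z-scores
    clean = z <= z_thresh                      # detection / isolation
    attacked = bool((~clean).any())            # attack-detection flag
    theta_hat = x[clean].mean()                # estimate on isolated data
    return attacked, theta_hat

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, 200)   # true model: N(theta=1, 1)
data[:20] += 10.0                  # causative contamination of 10% of samples
attacked, theta_hat = secure_mean_estimate(data)
```

A plain sample mean over the contaminated data would be biased by roughly one unit, whereas the estimate formed after detection and isolation remains close to the true parameter; tightening `z_thresh` trades false alarms (discarding clean samples) against misses (retaining corrupted ones), which is precisely the interplay the framework formalizes.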