Chebyshev's sum inequality is a statement about similarly ordered (e.g. non-increasing) sequences. In the question above we have only one non-increasing sequence of real numbers a1, a2, a3, ..., so Chebyshev's inequality is applied with both sequences taken to be the same. Also, for two non-negative numbers, their arithmetic mean (AM) is always greater than or equal to their geometric mean (GM); hence the AM-GM condition can be applied as well.
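For reference, standard statements of both inequalities in LaTeX (the sequence and variable names are generic, not taken from the elided question):

    % Chebyshev's sum inequality for two non-increasing sequences
    a_1 \geq \dots \geq a_n ,\; b_1 \geq \dots \geq b_n
    \;\Rightarrow\;
    \frac{1}{n}\sum_{k=1}^{n} a_k b_k \;\geq\;
    \left(\frac{1}{n}\sum_{k=1}^{n} a_k\right)
    \left(\frac{1}{n}\sum_{k=1}^{n} b_k\right)

    % AM-GM for two non-negative numbers
    \frac{x+y}{2} \;\geq\; \sqrt{xy}, \qquad x, y \geq 0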
Each self-contained chapter in this section includes the necessary definitions, theory, and notation and covers a range of theorems and problems, from fundamental to very specialized. The final part presents either solutions or hints to the exercises. Slightly longer than what is found in most texts, these solutions provide complete details for every step of the problem-solving process. In the first part of the book, the author discusses. The Handbook provides an essential reference work for students and researchers in applied mathematics, engineering, and physics.
The most important formulas, functions, and results used in applications of mathematics are covered. New material includes proof by mathematical induction, properties of spherical Bessel functions, more detailed results on orthogonal polynomials, interpolation and Padé approximation, and a section on the z-transform. The original thumb-tab indexing has been retained, as it provides an easy reference system that supplements the contents listing and extensive index.
The information is organized logically instead of alphabetically for better comprehension and quick, convenient access. The book contains extensive cross-referencing between the mathematical and physical sections. Fundamental concepts, theorems, and laws are demonstrated through numerous practical examples and tasks to help build problem-solving skills. It is a tool for students of many disciplines, scientists, engineers, teachers, professionals, writers, and also for the general reader with an interest in mathematics and science.
It provides a wide range of mathematical concepts, definitions, propositions, theorems, proofs, examples, and numerous illustrations. The difficulty level varies from chapter to chapter, and sustained attention will be required for some. The structure and list of Parts is quite classical: I. Foundations of Mathematics, II. Algebra, III. Number Theory, IV. Geometry, V. Analytic Geometry, VI. Topology, VII. Analysis, IX. Category Theory, X. Probability and Statistics, XI. Applied Mathematics. Appendices provide useful lists of symbols and tables for ready reference. The blueprint for twentieth-century mathematical thought, thanks to Hilbert and Bourbaki, is the axiomatic development of the subject. As a result, logic plays a central conceptual role. At the same time, mathematical logic has grown into one of the most recondite areas of mathematics.
Most of modern logic is inaccessible to all but the specialist. Yet there is a need for many mathematical scientists, not just those engaged in mathematical research, to become conversant with the key ideas of logic. The Handbook of Mathematical Logic, edited by Jon Barwise, is in point of fact a handbook written by logicians for other mathematicians.
It was, at the time of its writing, encyclopedic, authoritative, and up-to-the-moment, and it remains a comprehensive and authoritative book for the cognoscenti. But it is overwhelming for the casual user. There is a need for a book that introduces important logic terminology and concepts to the working mathematical scientist who has only a passing acquaintance with logic.
The material is presented so that key information can be located and used quickly and easily. Each chapter includes a glossary. Individual topics are covered in sections and subsections within chapters, each of which is organized into clearly identifiable parts: definitions, facts, and examples.
Examples are provided to illustrate some of the key definitions, facts, and algorithms. Some curious and entertaining facts and puzzles are also included. Readers will also find an extensive collection of biographies. However, whenever the tool gets stuck, it can be quite painful to get around the problem. Because there are few tactics, offering a low level of control, direct proofs can be very hard. In summary, having few tactics has its advantages: they are easy to use, to get used to, to remember, and to explain.
Yet a deep understanding of each tactic's behaviour is fundamental for fine-grained control of the prover. The GUI [Saa99b] is written in Python and provides a rather smooth experience for theorem proving in Z, enabling the user to prove quite complex theorems with a few mouse clicks.
This interface encodes the Z specification in an XML-like file that can be browsed and type checked easily. During proofs, theorems available for automatic application also appear in pop-up menus, giving some hints on possible paths to follow or avoid.
No information about internal messages issued during proofs is given, though. In our experience, the free version of the GUI has some shortcomings: proprietary clipboard support, lack of LaTeX exporting, inability to save the work done after an error or abortion, and inability to prove all paragraphs directly.
The textual interface allows stand-alone execution as well as integration with emacs and direct API calls via socket connections. On emacs, it provides key-maps of commands allowing the user to perform proof, inspection, and maintenance commands whilst editing the related documents. Furthermore, the textual interface also provides two modes of operation: interactive and batch.
In interactive mode, one can read, type check, and prove theorems and declarations from different files.
All conclusions are kept in memory and can be accessed by a series of maintenance commands [MS97, Chapter 4]. In batch mode, a script with proof and maintenance steps is loaded, and proofs are carried out in the background.
Once a run completes successfully, verification files are produced; whenever the source files are modified, these verification files become outdated. Inclusion of files through z-sections [Toy00] for declarations, and z-section-proofs for proof scripts, is available as long as recursive inclusion is avoided. This separation of declaration and proof sections enables not only a great degree of modularity, but also a considerable amount of control and partition of responsibilities.
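As a hedged illustration of this separation (the file and section names are hypothetical; the z-section and z-section-proofs mechanisms are those of [Toy00]):

    stack.tex        a z-section "stack" holding the data type, schema,
                     and rule declarations
    stackProofs.tex  a z-section-proofs "stack" holding only the proof
                     scripts, with the declaration section as its parent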
In practice this does not always happen. Often, transformations on the goal are made such that, instead of expanding the goal, a contradiction is introduced at the assumptions. Therefore, the real goal to prove becomes the contradictory assumption introduced, and transforming that contradictory assumption to false is the usual path to finish the proof. Generally, the distributive law of implication over disjunction is used to transform such goals into the appropriate form for the tool.
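For reference, the classical equivalences usually called distribution of implication over disjunction, in LaTeX (the text does not say which direction the tool applies, so both are given):

    P \Rightarrow (Q \lor R) \;\equiv\; (P \Rightarrow Q) \lor (P \Rightarrow R)
    (P \lor Q) \Rightarrow R \;\equiv\; (P \Rightarrow R) \land (Q \Rightarrow R)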
Nevertheless, ability values must be chosen carefully. This is important because the prover can take a wrong turn while transforming goals, which can turn simple proofs into a struggle. Abilities are used to give the user some control over the application of automatic transformations. The choice of abilities can represent the difference between finishing a proof and reaching a dead end within the same theorem.
Different usages play different roles within the available tactics. There are three usages and two abilities; they are given in roman font. Declarations can be either enabled or disabled, where enabled is assumed as the default when no ability is given. On the one hand, enabled declarations allow the prover to automatically apply the related definition or theorem whilst performing a tactic. On the other hand, disabled declarations allow the user to carefully select where and when to apply local or global transformations.
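As a hedged sketch of how this control might look in a proof script (the rule name rFooDef is hypothetical; the with disabled command form is described in [MS97], and the exact concrete syntax should be checked there):

    with disabled (rFooDef) reduce
    apply rFooDef

The first step keeps the definition folded while reducing everything else; the second unfolds it exactly where needed.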
Although some guidelines do exist for the proper selection of the different usages, it is difficult to predict appropriate ability values beforehand. Experience with the theory and the prover is the answer. When deciding on abilities, one should focus on the theory's users and the sort of theorems they are likely to prove. Even so, once in a while one might come back to change particular abilities at some points.
However, bear in mind that this has a big chance of interfering with direct proofs; guided proofs are less likely to be affected, though. The available usages are called rules. There are rewriting, assumption, and forward rules. They are used for different sorts of automatic transformations.
Furthermore, there are the normal theorems and axioms, detailed next. Rules affect transformation tactics whenever there is a pattern match on the formulae. When enabled, rewriting rules are applied automatically and can either finish the goal or generate some side-condition type checks. When disabled, they can be applied point-wise by the user, either locally or globally, on the current theorem.
An important aspect to bear in mind when deciding how to declare an automation rule is the way its application conditions are configured.
Although it is an enabled rewriting rule, it cannot be applied automatically or through pop-up menus, but only manually. Therefore, automation in this case is compromised by the order of declaration of theorems. Thus, the point of concern is: a wise choice of application conditions when introducing rewriting rules is an important factor in the level of automation achieved.
Another option would be to introduce both versions. Even though this is possible, care must be taken to avoid unpredictable side effects such as infinite loops during rewriting. In conclusion, the guideline is to introduce rewriting rules whose side condition is the predicate that appears most often. This is highly dependent on the sketch of the proofs one wishes to perform. To keep desirable levels of automation, the bottom line in this situation can be a redesign of the proof plan to suit the conditions of the available rewriting rules.
The user can declare axiomatic definitions or theorems as rewriting rules by tagging them with the rule keyword. Nevertheless, some syntactic restrictions apply [MS97, Chapter 3]. The difference between a normal theorem and a rule is that the latter is given as a tautology.
Giving theorems as tautologies is a clever representation: it exploits the predicate calculus reasoning and tautology-checking abilities of the lower layers of the prover. Basically, it provides machinery to increase the level of automation.
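As a hedged, schematic example of a rewriting-rule declaration (the rule name and body are hypothetical, following the naming conventions introduced later in the text; the concrete paragraph markup accepted by the tool is documented in [MS97, Chapter 3] and should be checked there):

    theorem rule rUnionIdem
      \forall S: \power \num @ S \cup S = S

When enabled, any sub-expression matching S \cup S can then be rewritten to S automatically.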
In contrast to the intuitive nature of rewriting rules, the remaining kinds of rules are somewhat awkward in their format; their use is usually learned through careful observation of the transformations between proof steps.
Assumption rules can only be given as theorems; they are tagged with grule. They are important for automatic discharge of side conditions and type checks generated by the application of other theorems and declarations.
They are particularly useful when complex data types are involved. A simple example is given for the proof of a theorem involving sequences. These type transformations go up to the last enabled rewriting rule associated with the definitions of the expression being transformed. In this case, the assumption rule was given according to the need that showed up during previous proofs involving the expression. The proof of this assumption rule could certainly be done during the proof of the related theorem involving R and P.
However, this plays against modularity. Let us assume that this type check on sequences is a common scenario in our proofs. Giving an assumption rule instead provides smaller, self-contained proof scripts.
The gSeqPFunType option enables simplifications to take place automatically for a smaller number of definitions, since it is less generic, whereas the gSeqType option enables a greater number of automatic transformations. One does not always exclude the other, however. In this particular scenario, it does not bring much advantage because sequences are already well-equipped with powerful assumption rules for simple types.
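As a hedged sketch of the two options (the names gSeqPFunType and gSeqType are those mentioned above; the bodies are our reconstruction of the obvious candidate type statements for a sequence over some set X, not taken from the tool):

    theorem grule gSeqPFunType
      \forall s: \seq X @ s \in \nat \pfun X

    theorem grule gSeqType
      \forall s: \seq X @ s \in \power (\num \cross X)

The second body states the maximal type of sequences, which is why it supports a greater number of automatic transformations.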
They were given here for the sake of illustration. Throughout the definitions of our theory, however, such a trick was used for more effective purposes. Forward rules have a similar role to assumption rules: increasing the degree of automation by smoothly discharging type checks. These rules are usually introduced to inform the prover about the types of schema components. Therefore, they are often used whenever theorems involving schemas are necessary.
This is the case, for instance, during refinement simulation proofs; a sketch is given below. The syntactic restrictions are well documented and the error messages returned are quite helpful. Normal theorems that are not involved in automation schemes are also often desired.
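Returning to forward rules, here is a hedged, schematic sketch for a schema component (the schema Test, its component x, and the label fTestXRelType reappear in the naming conventions below; the bodies are illustrative rather than exact tool syntax):

    \begin{schema}{Test}
      x: X \rel Y
    \end{schema}

    theorem frule fTestXRelType
      \forall Test @ x \in \power (X \cross Y)

The forward rule tells the prover the maximal relational type of x whenever the schema Test occurs in a formula.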
The rule of thumb is: theorems that are not applied often should not be made rewriting rules. Bear in mind that although a large number of rewriting rules provides higher levels of automation, the side effect is degraded performance. In other words, the more rewriting rules available, the longer it takes to check whether their conditional sub-formulae are satisfied.
Another reason for normal theorems arises when the tautology elimination engine automatically removes recently applied rewriting rules during transformation tactics. In that case, the important information just introduced is removed from the assumptions before being used. Fortunately, this undesired scenario does not happen frequently. Finally, there is a useful trick regarding rewriting rules involving generic types. Basically, to allow the prover to infer generic actuals, maximal types must be used.
Therefore, whenever one has a rewriting rule with generic types, the generic actuals given must be maximal in order to avoid problems. Unfortunately, using maximal types as generic actuals is not always possible; for example, when the rewriting rule's assumptions or conclusions mention expressions with explicit reference to non-maximal types. In cases where information about strictly positive numbers is needed, the generic actual for the type B explicitly mentioned in the conclusions needs to be N1 instead of Z.
Under these circumstances, the problems with generic actuals around expressions arise. The solution for this situation is hinted at in some theorems of the toolkit. The trick is to universally quantify the generic non-maximal types of interest (in this case B) as power sets of the corresponding maximal type.
Then the actual theorem can come next, with the quantified types. Since the original generic types are quantified over the corresponding maximal types, no harm is introduced.
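Schematically, instead of a generic theorem over [B] whose intended actual is the non-maximal N1, one writes something like the following (the theorem name is hypothetical, and pred(x, B) stands for the original rule's predicate):

    theorem tFoo
      \forall B: \power \num @
        \forall x: B @ pred(x, B)

Here B ranges over \power \num, the power set of the maximal type, and can later be instantiated with \nat_1 through the use command.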
However, one shortcoming of this trick is that it usually introduces the kind of syntactic restrictions that forbid the theorem from being a rewriting rule. Hence, it must be given as a normal theorem, and automation gets compromised. One might also argue that it increases the complexity of using the theorem, due to the need for explicit instantiations.
The important point is that the solution completely removes the problems with generic actuals in the expressions of assumptions or conclusions. The user can also explicitly declare an axiom through the usage axiom. During the development of a theory, one might be interested in introducing a well-known theorem as an axiom because there is no available machinery to prove it. Some of these functionalities are not available and might be hard to implement in the prover front-end.
Therefore, one might simply introduce these theorems as axioms, assuming they have already been proved elsewhere. Having some structure in the labels given to paragraphs is helpful for learning and indexing. Thus, we introduce some naming conventions inspired by [Art00]. Rewriting rules are prefixed with r, assumption rules with g (for grule), and forward rules with f. Then comes the theorem or declaration name, capitalised at its important parts. Assumption and forward rules are usually related to theorems for type checks or function results.
Hence, we suffix them with Type and Result respectively; other suffixes are also applied as needed. Whenever specialised non-maximal types are necessary as rules, we give a hint of the choice just before the suffix. So, from the forward rules example, the label of the theorem giving the relational type for the x component of schema Test is fTestXRelType. We follow the naming convention of prepending d for declarations.
Further naming conventions are introduced as needed. We call proof commands tactics. For example, it is possible to reset the entire theory, read and undo declarations to and from the theory, and retry or move backwards in a proof. The theory management commands are simple and no further comment is provided. Instead, we concentrate on the commands that actually transform the goal in some sense.
Understanding the behaviour of those is crucial for an effective use of the available automation power. The tool documentation provides complementary explanations of all proof commands. The apply command is available for any rewriting rule regardless of its ability. The tactic is very powerful and user friendly because it makes all the necessary instantiations properly.
Because it can be used either globally or restricted to a particular expression or predicate, the tactic increases to a great extent both the user's degree of control and the prover's level of automation. Throughout the proof scripts in our theory, unless otherwise necessary, global applications were preferred for readability. The alternative is the use command, where instantiations must be made explicitly. It introduces the theorem or declaration being used into the assumptions, and is more likely to appear in direct proofs or particular cases of guided proofs.
The use tactic is preferred over apply whenever the rewriting rule needs to be applied with a particular instantiation not present in the sub-formulae. Since it just includes conclusions into the assumptions, it is not possible to restrict its application to a particular expression or predicate.
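A hedged sketch contrasting the two commands (the rule and theorem names are hypothetical; the to expression form and the bracketed instantiation follow the command descriptions above and [MS97], but the exact concrete syntax should be checked there):

    apply rCatAssoc                                    global application
    apply rCatAssoc to expression (s \cat t) \cat u    restricted to one expression
    use tCatResult[s := \langle 1 \rangle]             explicit instantiation,
                                                       added to the assumptions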
For large proofs, such inclusion of conclusions can imply performance penalties, due to further tactics rewriting useless assumptions. There are another three very important basic tactics: rearrangement of predicates, equality substitution, and quantifier elimination. As far as we know, the criteria for the rearrangement ordering are not documented anywhere. The ordering criterion, given by the complexity of expressions, seems to respect the following categories: (i) simple formulae, (ii) equalities, (iii) inequalities, (iv) compound formulae, (v) quantifiers, (vi) logical implication and equivalence, and (vii) negations.
Formulae within the same category are given in ascending alphabetical order. More precisely, the order is: 1. undecorated type formulae; 2. undecorated equalities; 3. undecorated inequalities; 4. decorated versions of the three previous categories; 5. compound expressions, following the components' binding powers; 6. existential and universal quantifiers, with their parts following these criteria; 7. logical implication and equivalence; 8. negations of any of the previous categories, in ascending order. Nearly all of the time the arrangement ordering is sufficient and needs little attention. The documentation suggests rearranging formulae as often as possible; our experience shows some exceptions, mainly when one needs to optimise proofs. The point is that rearranging predicates can severely affect the effectiveness of transformation tactics.
It seems that these tactics execute in a single pass through the formulae. Therefore, whenever a simpler sub-formula comes first, the prover is usually able to reach more clever conclusions for later, more complex sub-formulae. The rule of thumb is: unless otherwise necessary, the sooner rearrange is issued the better. The equality substitute command substitutes any occurrence of the right-hand side of an equality whenever the left-hand side appears later in the sub-formulae.
However, care should be taken with the formulae ordering. Like the apply command, it is possible to restrict the equality locally via an expression to substitute; this also allows one to substitute the left-hand side from the right-hand side. Finally, global equality substitution can sometimes be used for very simple transformations with predicate calculus reasoning, such as negation of implication, equivalence, or quantifiers.
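A hedged sketch of both forms (the expression x is hypothetical; the local form follows the description above, and the exact syntax should be checked in [MS97]):

    equality substitute        applies all available equalities globally
    equality substitute x      restricted to the expression x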
For example, applying equality substitute on formulas such as (1) performs this kind of transformation directly. The final basic tactics are related to quantifier elimination; there are two forms, depending on whether the quantifier appears in the goal or in the assumptions. The simplifications performed by the simplify command are equality, integer, and predicate calculus reasoning (e.g., the one-point rule).
Simplification is affected by grules and frules whenever their hypotheses match a sub-formula; the conclusions of these lemmas are then included as assumptions. However, during some domain checks and complex proofs, simplification is not enough and rewriting or reduction is strictly necessary. Rewriting transformations are given by the rewrite command.
It performs simplification together with automatic application of enabled rewriting rules. For disabled definitions, however, the invoke command is necessary to expand schema declarations or data types declared as disabled, such as partial functions or injections. These expansions can be applied globally, or locally to a predicate.
Reduction is the most complex transformation scheme and is given by the reduce command. It performs rewriting together with further clever but simple deduction schemes. This yields the biggest step in the transformation of formulae, with the worst performance.
In fact, reduction is more than simply expansion together with rewriting: it recursively performs these activities until the formula stops changing. Also, conditional rewriting rules not applied through normal rewriting are used here if their conditions can be reduced as well. The tool offers variations of these tactics. The trivial simplify command limits simplification by ignoring equality and integer reasoning, not applying the one-point rule, and using neither assumption nor forward rules.
The trivial rewrite command is equivalent to the application of unconditional rewriting rules together with assumption and forward rules, but without simplification and some predicate calculus reasoning like the one-point rule. Trivial reduction is not available.
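As a rough summary of the transformation hierarchy described above (our paraphrase of the text, not output from the tool):

    simplify          equality, integer, and predicate calculus reasoning
    rewrite           simplify + enabled rewriting rules
    reduce            rewrite + expansion (invoke), repeated until the
                      formula stops changing
    trivial simplify  simplify minus equality/integer reasoning, the
                      one-point rule, and assumption/forward rules
    trivial rewrite   unconditional rewriting rules + assumption/forward
                      rules, without simplification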
Trivial tactics are rarely needed. Nonetheless, they allow functionality not available elsewhere, mainly trivial rewrite; for example, in some proofs involving the toolkit definition of cardinality (see [Saa99a]). Normalisation is available for all three major transformation techniques and is left disabled by default. Indeed, normalisation could be simulated with the related transformation tactic together with stacked cases analysis.
Nevertheless, for relatively large goals, normalisation can cut down the proof enormously, taking the most of automatic reasoning and case splitting. The cases command enables one to stack compound goals and prove them separately. Recursive splitting is also possible by multiple case splitting. It is available in two situations: (i) when a conjunction is present in the goal, or (ii) when the entire formula is a conditional expression.
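A hedged sketch of case splitting over a conjunctive goal (the goal is hypothetical; cases is the command described above, and next, the companion command for moving to the following stacked case, is described in [MS97], so its exact behaviour should be checked there):

    Goal:  x \in \dom f \land f x = y
    cases        splits the conjunction; the first conjunct becomes the goal
    ...          prove x \in \dom f
    next         moves on to the remaining conjunct f x = y

Proving each stacked case in turn, and issuing next between them, finishes the original compound goal.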