An Introduction To Kolmogorov Complexity And Its Applications 4th Edition Ming Li, Paul Vitányi - Solutions
[39] Let L be a set of natural numbers. We are given an infinite sequence of elements from L and this sequence is typical (that is, random) for one measure in a c.e. or co-c.e. set of halting algorithms for computable measures. Show that there is an algorithm which identifies in the limit a
[43] Let B be a finite alphabet. Show that there exists a lower semicomputable semimeasure W such that for every computable positive measure μ and every μ-random infinite sequence ω we have W(a|ω1:n) → μ(a|ω1:n) as n → ∞, for every a ∈ B. Comments. This W is a universal predictor, but
• [40] (a) Show that there exists an infinite binary sequence ω that is λ-random (with λ the uniform measure), and is 0′-computable, and there exists a universal semimeasure M (that is, a universal monotone Turing machine yielding this measure) such that M(ωn+1|ω1:n) → λ(ωn+1|ω1:n) for n
[38] Define 0′-lower semicomputable as lower semicomputable by a Turing machine equipped with an oracle for the halting problem, Section 1.7.2. Here we use monotone Turing machines, since we want to deal with infinite sequences. Define 0′-μ-randomness of infinite binary sequences as previously
[33] (a) Let μ be a positive computable measure, conditionally bounded away from zero as in Definition 5.2.3 on page 366. Show that for every μ-supermartingale t(x) as in Exercise 5.2.4, and for every number k, the set of infinite binary sequences ω such that there exists a number B (a limit)
[19] For every positive measure μ and every μ-supermartingale t(x) as in Exercise 5.2.4, the set of infinite binary sequences ω such that there exists a finite limit of t(ω1:n) has μ-measure one. Comments. Hint: use Claim 4.5.4 on page 327. Source: An.A. Muchnik (personal communication by N.K.
[12] Show that for every semimeasure σ and measure μ the function t(x) = σ(x)/μ(x) is a μ-supermartingale.
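Assuming the supermartingale condition of Exercise 5.2.4 is the usual averaging condition ∑_b μ(b|x)t(xb) ≤ t(x), the verification is a short computation using only the semimeasure property of σ; a sketch:

```latex
\sum_{b}\mu(b\mid x)\,t(xb)
  \;=\; \sum_{b}\frac{\mu(xb)}{\mu(x)}\cdot\frac{\sigma(xb)}{\mu(xb)}
  \;=\; \frac{1}{\mu(x)}\sum_{b}\sigma(xb)
  \;\le\; \frac{\sigma(x)}{\mu(x)} \;=\; t(x).
```

The final inequality is exactly the semimeasure property ∑_b σ(xb) ≤ σ(x).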
[20] Show the nonconvergence of the ratio in Theorem 5.2.2 for off-sequence prediction. Take the set of basic elements B = {0, 1}. Define the measure μ(1|ω1:n−1) = 1 − μ(0|ω1:n−1) = (1/2)n^{−3}. (a) Show that μ(0^n) → 0.450... for n → ∞, so 0^∞ is μ-random. (b) Show that M(0^{n−1}) =
• [27] Define Sn(a) = ∑_{l(x)=n−1} μ(x)(M(a|x) − μ(a|x))², with a ∈ B, with B the basic alphabet used in Theorem 5.2.1 on page 360. Let Sn = ∑_{a∈B} Sn(a). This is the summed expected squared difference. (a) Show that if B = {0, 1} then ∑_n Sn(0), ∑_n Sn(1) ≤ (1/2)K(μ) ln 2. (b) Show that for
[25] Prove Lemma 5.2.1 on page 359 for the cases in which B is a discrete, possibly nonbinary, alphabet. Comment. Source: [T.M. Cover and J.A. Thomas, Elements of Information Theory, Wiley, 1991, pp. 300–301, attributed to I. Csiszár, Studia Sci. Math. Hungar., 2(1967), 299–318, and others].
[43] Sequences with maximal Kolmogorov complexity of the initial segments are random in Martin-Löf's sense of passing all effective statistical tests for randomness. Hence, they must satisfy laws like the law of the iterated logarithm. (a) Show that if ω is an infinite sequence such that
[28] Let B be a finite nonempty set of basic symbols. Let δ0(ω|μ) be a universal sequential μ-test for sequences in B^∞ distributed according to a computable measure μ. Let φ be a monotone function as in Definition 4.5.3 on page 303, that is μ-regular as in Definition 4.5.5. (a) Show that
[42] Consider a finite or countably infinite basis B, and define a probability function p : B → [0, 1] such that ∑_{b∈B} p(b) ≤ 1. If equality holds, we call the probability function proper. The squared Hellinger distance ρ(q, p) between two probability functions q and p is defined as ∑_{b∈B} (√q(b)
[15] Show that an infinite sequence ω is random with respect to a computable measure μ iff the probability ratio μ(ω1:n)/M(ω1:n) is bounded below. Comments. Hint: see Theorem 4.5.7. This ratio can be viewed as the likelihood ratio of hypothesis μ(x) and the fixed alternative hypothesis M(x).
[48] The analogue of Theorem 4.3.3 does not hold for continuous semimeasures. Therefore, the inequality in Theorem 4.5.4 cannot be improved to equality. (a) Show that for every upper semicomputable function g : N → N for which Km(x) − KM(x) ≤ g(l(x)), we have Km(n) ≤ g(n) + O(1). (b) Show that
[22] Let μ(x) be a lower semicomputable probability measure. Suppose we define the cooccurrence of events and conditional events anew as follows: The probability of cooccurrence μ(x, y) is μ(x, y) = μ(x) if y is a prefix of x; it is μ(x, y) = μ(y) if x is a prefix of y, and equals zero
[31] (a) Let ν be a probability measure and G(n) = Ef(ω1:n) with f(ω1:n) = log ν(ω1:n) + log 1/Mc(ω1:n). (Ef(ω1:n) denotes the ν-expectation ∑_{l(x)=n} ν(x)f(x).) Show that lim_{n→∞} G(n) = ∞. (b) Let ν be a computable probability measure, and let F be a computable function such that
[15] We compare M(x) and Mc(x). Show that for some infinite ω (such as a computable real) we have lim_{n→∞} M(ω1:n)/Mc(ω1:n) = ∞. Comments. Hint: Since 0.ω is a computable real, lim_{n→∞} M(ω1:n) > 0.
[17] Let ν1, ν2, ... be an effective enumeration of all discrete lower semicomputable semimeasures. For each i define γi(x) = ∑{νi(xy) : y ∈ {0, 1}*}. (a) Show that γ1, γ2, ... is an effective enumeration of a subset of the set of lower semicomputable semimeasures. We call these the
• [27] Let μ be a computable measure over B^∞. We use M to estimate the probabilities μ(a|y) for a ∈ B and y ∈ B*. Show that for every n, ∑_{l(x)=n} μ(x) ∑_{i=1}^{n} M(u|x1:i−1) ≤ K(μ) ln 2, where we define x1:0 = ε and M(u|x1:i−1) = 1 − ∑_{a∈B} M(a|x1:i−1). Comments. Let μ be an unknown
• [37] What is the difference between semimeasure M and any (not necessarily lower semicomputable) measure μ? (a) Show that 1/M(x) differs from 1/μ(x), for infinitely many x, as the busy beaver function BB(n) differs from n (Exercise 1.7.19 on page 45) for every measure μ. (b) Show the same about
[32] By Exercise 4.5.4, Mnorm dominates M. (a) Show that M does not multiplicatively dominate Mnorm. (b) Show that for each normalizer a defining the measure Mnorm(x) = a(x)M(x) we have M(ω1:n) = o(Mnorm(ω1:n)), for some infinite ω. (c) (Open) Item (b) with 'all' substituted for 'some.' Comments.
• [25] The Solomonoff normalization of a semimeasure μ, with B = {0, 1}, is μnorm(x) = a(x)μ(x), with a(x) as defined in Definition 4.5.7 in Section 4.5.3. We call Mnorm, the normalized version of the universal lower semicomputable semimeasure M, the Solomonoff measure. This leads to an
[27] Even the most common measures can fail to be lower semicomputable if the parameters are not lower semicomputable. Consider a (p, 1 − p) Bernoulli process and the measure it induces on the sample space {0, 1}^∞. (a) Show that if p is a computable real number such as 1/2 or 1/3 or 1/(2√2) or π/4
[31] Let the probability that an initial segment x of a binary sequence is followed by a ∈ {0, 1}* be M(a|x) = M(xa)/M(x). (a) Show that there is a constant c > 0 such that the probability (with respect to M) of the next bit being 0 after 0^n is at least c. (b) Show that there exists a constant c
[09] Show that with basis B = {0, 1}, we have M(ε) < 1 and M(x) > M(x0) + M(x1), for all x in B* (strict inequality). Generalize this to arbitrary B.
[12] Show that for each NP-complete problem, if the problem instances are distributed according to m, then the average running time of any algorithm that solves it is superpolynomial unless P = NP. Comments. Source: [M. Li and P.M.B. Vitányi, Ibid.].
[12] Show that the m-average time complexity of Quicksort is Ω(n²). Comments. Source: [M. Li and P.M.B. Vitányi, Inform. Process. Lett., 42(1992), 145–149].
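The lower bound rests on the observation that inputs of low complexity carry high universal probability m, and first-element-pivot Quicksort degenerates to Θ(n²) on one such simple input, the already-sorted list. A minimal sketch of that degenerate behavior (the function name and comparison-counting convention are mine, not from the source):

```python
def quicksort_comparisons(xs):
    """Count element comparisons made by Quicksort with a first-element pivot.

    Illustrative only: the m-average lower bound hinges on simple inputs
    (high m-probability), such as the sorted list, forcing quadratic work.
    """
    count = 0

    def sort(a):
        nonlocal count
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        count += len(rest)  # one comparison of each remaining element with the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    sort(list(xs))
    return count

n = 100
print(quicksort_comparisons(range(n)))  # sorted input: n(n-1)/2 = 4950 comparisons
```

On a sorted n-element input this variant performs n(n−1)/2 comparisons, whereas a typical random permutation costs O(n log n); the m-average is dominated by such compressible worst-case inputs.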
[43] (a) Show that the minimal length of a program enumerating a set A (prints all elements of A in lexicographic length-increasing order and no other elements; we do not require halting in case A is finite) is bounded above by three times the negative logarithm of the probability that a random
[34] Let X = x1, x2, ... be a computable sequence of natural numbers (in N or the corresponding binary strings). The lower frequency of some element x in the sequence is defined as qX(x) = lim inf_{n→∞} (1/n) d({i : i ≤ n, xi = x}). (a) Show that there is a constant c > 0 such that cqU(x) ≥ qX(x), for all x in N. (b) Show that if U and V are
[32] Suppose we want to obtain information about a certain object x. It is not a good policy to guess blindly. The mutual information of two objects x and y was given in Example 3.8.2 on page 253 as I(x; y) = K(y) − K(y|x, K(x)). Show that ∑_y m(y)2^{I(x;y)} = O(1). Comments. In words, the expected
• [39] How many objects are there of a given complexity n? How many self-delimiting programs of length n are there? Let g(n) be the number of objects x with K(x) = n, and let Dn be the set of binary strings p of length n such that U(p) is defined. Define the moving average h(n, c) = 1/(2c + 1) ∑_{i=−c}^{c}
[29] We can also express statistics of description length with respect to C. For every lower semicomputable function f with {f(k) : k ≥ 1} satisfying the Kraft inequality, there exist fewer than 2^{k+f(k)+O(1)} programs of length C(x) + k for x. Comments. Hint: Consider a machine that assigns a code of
[13] Prove the following: There exists a constant c such that for every k and l, if a string x has at least 2^l programs of length k, then C(x|l) ≤ k − l + c. Comments. Therefore, C(x) ≤ k − l + 2 log l + c. So if x has complexity k and there are 2^l shortest programs for x (programs of length
[18] Give an example of a computable sequence of rational numbers an > 0 such that the sum ∑_n an is finite, but for each other computable (or lower semicomputable) sequence bn > 0, if bn/an → ∞ then ∑_n bn = ∞. Comments. Hint: Let rn be an increasing computable sequence of rational numbers with
[19] Show that ∑_{l(x)=n} m(x) = m(n), up to a fixed multiplicative constant. Comments. Source: [P. Gács, Ibid.].
[32] We study the statistics of description length. By the coding theorem, Theorem 4.3.3, we have K(x) = log 1/QU(x) up to an additive constant. Informally, if an object has many long descriptions, then it also has a short one. (a) Let f(x, n) be the number of binary strings p of length n with U(p)
• [32] Let x, y ∈ {0, 1}∗. There are at least two ways to define the conditional discrete universal distribution m(x|y). We used in Definition 4.3.4 the multiplicative domination of every lower semicomputable discrete semimeasure. In turn, a lower semicomputable discrete semimeasure P(x|y)
• [41] Occam's razor states that the simplest object has the highest probability. In terms of prefix Kolmogorov complexity this is represented by m(x) = 2^{−K(x)}. This exercise shows that, remarkably, the universal discrete measure of a finite set is close to the universal discrete measure of the
[28] Show that the universal distribution m has infinite entropy: H(m) = ∑_x m(x) log 1/m(x) = ∞, where the summation is over all x ∈ {0, 1}*. Comments. Hint: By the coding theorem, Theorem 4.3.3 on page 275, it suffices to show that ∑_x 2^{−K(x)}K(x) = ∑_n ∑_{l(x)=n} 2^{−K(x)}K(x) = ∞. The exercise
[21] Show that the greatest monotonic nonincreasing lower bound on the universal distribution m (universal lower semicomputable discrete semimeasure) converges to zero more slowly than the greatest nonincreasing monotonic lower bound on any positive computable function that goes to zero in the
[15] Show that the class of computable measures does not contain a universal element.
[12] Show that ∑_x 2^{−K(x|y)} ≤ 1. Comments. Hint: use the Kraft inequality, Theorem 1.11.1.
[08] Show that if K(x) ≤ log x then ∑_{i=1}^{x} 2^{−K(i)} ≤ x·2^{−K(x)}. Comments. Hint: use Theorem 4.3.3.
[14] (a) Let U be the reference prefix machine of Theorem 3.1.1 on page 206. Define P(x) = ∑_{U(p)=x} 2^{−l(p)}. Show that ∑_x P(x) ≤ 1, so P(x) qualifies as a probability mass function over the integers. (We use the term 'probability mass function' loosely here for nonnegative real-valued
[18] Let μ be a semimeasure over B*. Show that if μ is computable, then we can find an algorithm to compute μ(x) and ∑_{b∈B} μ(xb), for all x ∈ B*, to any degree of accuracy. Comments. These properties are implicitly used throughout Section 4.5 on continuous semimeasures. Source: [V.G.
[27] (a) Show that the mutual information I(x; y) = K(x) + K(y) − K(x, y) according to Equation 3.20 on page 253 is symmetric: I(x; y) = I(y; x). (b) Show that the mutual information in Item (a) coincides with the mutual information I(x : y) = K(y) − K(y|x) according to Equation 3.15 on page 249 up
[25] (a) Show that C(a, b) = K(a|C(a, b)) + C(b|a, C(a, b)) + O(1). (b) Show that if a = ε then C(b) = C(b|C(b)) + O(1). (c) Show that if b = ε then C(a) = K(a|C(a)) + O(1) and this implies C(a|C(a)) = K(a|C(a)) + O(1). Comments. This governs the additivity for plain Kolmogorov complexity. One can use
[12] Let n = l(x) and K(x) = n + K(n) + O(1). Show that K(x, n) = K(x) + K(n|x) = K(n) + K(x|n) = K(x, n*) up to additive constants. Comments. This relates to the symmetry of information issue for K. The proof we gave that Theorem 2.8.2 on page 192 is sharp for C does not hold for K. Hence, to
[20] Define Chaitin's conditional complexity as in Example 3.8.2 by Kc(x|y) = K(x|y*) with y* = ⟨y, K(y)⟩, or y* is the first enumerated shortest program for y. When there is more than one object in the conditional then define Kc(x|y, z) := Kc(x|⟨y, z⟩) := K(x|⟨y, z⟩*), and so on. Define
[12] Let x* be the first enumerated shortest program for x. Show that x* and ⟨x, K(x)⟩ contain the same information: K(x*) = K(x, K(x)) + O(1).
[22] Show that {K(x) : x = 1, 2,...} is the length set of an additively optimal universal code in the sense of Section 1.11.1.
[29] Let ω be an infinite binary sequence that is Martin-Löf random with respect to the uniform measure (for example, the halting probability Ω of Section 3.5.2). (a) Show that ω is von Mises–Wald–Church random (Section 1.9). (b) Show that ω is effectively unpredictable. That is, let f be a
• [42] Does a similar result to that in Exercise 3.5.19 hold for prefix complexity? We know that the maximal value of K(x) is K+(x) = l(x) + K(l(x)) + O(1), Example 3.2.2 on page 213. Moreover, if K(x) = K+(x) is maximal then C(x) = C+(x) by Exercise 3.3.4 on page 218. An infinite binary
[37] An infinite binary sequence ω is n-random if it is Martin-Löf random in ∅^{(n−1)}. That is, 1-randomness is Martin-Löf randomness. Show that ω = ω1ω2... is 2-random iff C(ω1:n) ≥ n − O(1) for infinitely many n. Comments. This interprets and explains Theorem 2.5.6 on page 154. Some
[37] In Exercises 3.5.16 and 3.5.17 we gave natural examples of lower semicomputable Martin-Löf random infinite binary sequences (or reals). They form as it were the fringe, the lowest, first, order of Martin-Löf randomness. These objects are random with respect to the primary notion of
[41] Let U0, U1, ..., Uk ⊆ {0, 1}^∞ for all k ≥ 0 denote a sequential universal Martin-Löf test as in Section 2.5.2. Let λ denote the uniform measure on {0, 1}^∞. Note that U0 = {0, 1}^∞ by definition and λ(U0) = 1. Lower semicomputable reals are defined in Exercise 3.5.15. A sequence r1,
• [37] Consider the family U of universal prefix machines U satisfying the conditions of the proof of Theorem 3.1.1 on page 206. Every such U has an associated halting probability ΩU = ∑_{U(p)<∞} 2^{−l(p)}
[41] Consider an infinite binary sequence ω as a real number r = 0.ω. Recall that ω is lower semicomputable if there is a total computable function f : N → Q such that f(i + 1) ≥ f(i) and lim_{i→∞} f(i) = r. We call ω an Ω-like real number if (i) there is a lower semicomputable function g
[29] Let ω = ω1ω2... be an infinite binary sequence and let c(ω) be the smallest c for which K(ω1:n) ≥ n − c. Let ω be Martin-Löf random with respect to the uniform distribution. Let S(n) = ∑_{i=1}^{n} ωi. (a) Show that given ε > 0, we can compute an n(c, ε) such that (b) Show that for given
[38] Let ω be an infinite binary sequence that is random with respect to the uniform measure. Let g be a computable function such that the series ∑_n 2^{−g(n)} diverges, for example, g(n) = log n. Let h be a computable function that is monotone and unbounded, such as h(n) = log log n. Show that for
[31] Let ω be an infinite binary sequence. Show that if there exists a constant c such that K(ω1:n) ≥ n − c for all n, then for all k we have K(ω1:n) − n ≥ k from some n onward.Comments. Hint: This follows easily from Exercise 3.5.4 Item (a), and can be seen as a strengthening of that
[27] Far from being a nuisance, the complexity oscillations actually enable us to discern a fine structure in the theory of random sequences. A sequence ω is Δ⁰₂-definable if the set {n : ωn = 1} is Δ⁰₂-definable, Exercise 1.7.21 on page 46. We consider infinite binary sequences that are Δ⁰₂
• [46] A set A ⊆ N is K-trivial if its characteristic sequence χ = χ1χ2... satisfies K(χ1:n) ≤ K(n) + O(1). Certain properties of such sequences were already established in Exercise 3.5.9, Items (b) and (c). Here we identify sets with their characteristic sequences. (a) Show that all
• [35] Recall that a sequence ω is computable iff C(ω1:n) ≤ C(n) + O(1), Example 2.3.4. If ω is computable, then for all n we have K(ω1:n|n) = O(1) and K(ω1:n) ≤ K(n) + O(1). (a) Show that if K(ω1:n|n) ≤ c, for some c and all n, then ω is computable. (b) Show that if K(ω1:n) ≤ K(n) + c,
• [39] Let ω = ω1ω2... and ζ = ζ1ζ2... be two infinite binary sequences. Let ω ⊕ ζ = η mean that η2i = ωi and η2i+1 = ζi, for all i. (a) Show that ω ⊕ ζ is random in the sense of Martin-Löf iff ζ is random in Martin-Löf's sense and ω is Martin-Löf random in ζ (that
[43] Let ω be an infinite binary sequence. Show that the following are equivalent to the sequence ω being random in Martin-Löf's sense (with respect to the uniform distribution): (a) C(ω1:n) ≥ n − K(n) ± O(1) for every n. (b) γ0(ω1:n|L) = n − C(ω1:n|n) − K(n) + O(1) is finite with L the
[39] We improve on Exercise 3.5.3, Item (b). As usual, n* denotes the shortest program for n, and if there is more than one, then the first one in standard enumeration. Let f be a function. (a) Show that if ∑_n 2^{−f(n)−K(f(n)|n*)} < ∞, then K(ω1:n) ≥ n + K(n) − f(n) for all but finitely many
[36] Let f be a function. (a) Show that if the series ∑_n 2^{−f(n)} converges, then there is a Martin-Löf random sequence ω such that K(ω1:n) ≤ n + f(n) + O(1), for all n. (b) Show that ∑_n 2^{−f(n)} = ∞ iff for every Martin-Löf random sequence ω there are infinitely many n such that K(ω1:n) >
[26/38] Let ω be an infinite binary sequence. (a) Show that ω is Martin-Löf random iff ∑_n 2^{n−K(ω1:n)} < ∞. (b) Let f be a (possibly incomputable) function such that ∑_n 2^{−f(n)} = ∞. Assume that ω is random in Martin-Löf's sense. Then, K(ω1:n) > n + f(n) − O(1) for infinitely many
[29] We investigate the complexity oscillations of K(ω1:n) for infinite binary sequences ω. If ω is an infinite sequence that is random in the sense of Martin-Löf, then these oscillations take place above the identity line, K(ω1:n) ≥ n + O(1), for all but finitely many n. The maximal
[23] Let 1 < r, s < ∞ be integers. Show that a real number in the unit interval [0, 1] expressed in r-ary expansion (equivalently, an infinite sequence over r letters) is Martin-Löf random with respect to the uniform distribution in base r iff it is random expressed in s-ary expansion with respect
[21] Let A be an infinite computably enumerable set of natural numbers. Show that if we define θ = ∑_{n∈A} 2^{−K(n)}, then K(θ1:n) ≥ n − O(1) for all n. (Therefore, θ is a random infinite sequence in the sense of Martin-Löf by Schnorr's theorem, Theorem 3.5.1.) Comments. It follows that θ
[39] (a) Show that l*(x) = log x + log log x + ··· (all positive terms) satisfies ∑_x 2^{−l*(x)} < ∞. (b) Show that for all x we have K+(x) ≤ l*(x) + O(1) (K+ as in Exercise 3.3.1). (c) Show that for most x we have K+(x) = l*(x) + O(1). Comments. Hint for Item (b): use Item (a). Source:
[36] (a) Use the notation of Exercise 3.3.1. Show that there are infinitely many x such that if K(x) = K+(x), then C(x) = C+(x). (b) Show that for some constant c ≥ 0 there exist infinitely many x (l(x) = n) with C(x) ≥ n − c and K(x) ≤ n + K(n) − log2 n + c log3 n. (c) Show that for some
[12] Show that ∑_x 2^{−K(x|l(x))} does not converge.
[20] Analyze the integer function K(x|n) with n = l(x). (a) Show that there is a constant c such that there are infinitely many x such that K(x|n) ≤ c. (b) Let h = n − C(x|n). Show that K(x|n) ≤ C(x|n) + K(h|n) + O(1). (c) Use Item (b) to show that K(x|n) ≤ n + O(1) for all x. (d) Show that
[10] Let C+(x) := max{C(y) : l(y) = l(x)}, and K+(x) := max{K(y) : l(y) = l(x)} as in Example 3.2.2 on page 213. (a) Show that C+(x) = log x + O(1). (b) Show that K+(x) = log x + K(log x) + O(1).
[24] Let φ : {0, 1}∗ → N be a prefix algorithm, that is, a partial computable function with a prefix-free domain. Then the extension complexity of x with respect to φ is defined by Eφ(x) = min{l(p) : x is a prefix of φ(p)}, or Eφ(x) = ∞ if there is no such p.(a) Show that there is an
[36] (a) Show that ∑_{x,y,z} 2^{−K(x,y,z)} ≤ 1. (b) Show that 2K(x, y, z) ≤ K(x, y) + K(x, z) + K(y, z) + O(1) for all strings x, y, z with K(x), K(y), K(z) ≤ n. (c) Let X, Y, Z be finite sets. Let f : X × Y → R, g : Y × Z → R, and h : Z × X → R be functions with nonnegative values.
• [34] Show that C and K do not agree, to within any given additive constant, on which strings are more complex. Formally, show that for every positive integer c, there are strings x, y such that both C(x) − C(y) ≥ c and K(y) − K(x) ≥ c. Comments. Source: attributed to An.A. Muchnik in
[32] Show that there is a constant c such that for every d we have that if K(x) > l(x) + K(l(x)) − (d − c) then C(x) > l(x) − 2d. Comments. This shows that if we can compress the K-complexity of some x less than d − c below l(x) + K(l(x)), then we can compress the C-complexity less than 2d below
• [46] How are K and C precisely related? Is K just a pumped-up C-version? The following formulas relate C and K: (a) Show that K(x) = C(x) + C(C(x)) + O(C(C(C(x)))). (b) Show that C(x) = K(x) − K(K(x)) − O(K(K(K(x)))). (c) Show that C(C(x)) − K(K(x)) = O(K(K(K(x)))). (d) Show that K(K(K(x)))
• [41] (a) Show that d({x : l(x) = n, K(x) < n − K(n) − r}) ≤ 2^{n−r−K(r|n*)+O(1)}. (b) Show that there is a constant c such that if a string x of length n ends in at least r + K(r|n*) + c zeros then K(x) < n + K(n) − r. (c) Show d({x : l(x) = n, K(x) < n − K(n) − r}) ≥
[17] The following equality and inequality seem to suggest that the shortest descriptions of x contain some extra information besides the description of x. (a) Show that K(x, K(x)) = K(x) + O(1). (b) Show that K(x|y, i − K(x|y, i)) ≤ K(x|y, i). Comments. These (in)equalities are in some sense
[11] Show that K(x) ≤ C(x) + C(C(x)) + O(log C(C(x))).
[15] Let φ(x, y) be a computable function. (a) Show that K(φ(x, y)) ≤ K(x) + K(y) + cφ, where cφ is a constant depending only on φ. (b) Show that (a) does not hold for C-complexity. Comments. Hint: In Item (b) use the fact that the logarithmic error term in Theorem 2.8.2, page 192, cannot be
[15] Show that with n = l(x), we have C(x|n) ≤ K(x) ≤ C(x|n) + l*(C(x|n)) + l*(n) + O(1). Comments. Hint: This is an easy consequence of Equation 3.1. Source: [S.K. Leung-Yan-Cheong and T.M. Cover, IEEE Trans. Inform. Theory, IT-24(1978), 331–339].
[17] Show that K(x) ≤ K(x|n, K(n)) + K(n) + O(1), for n = l(x). Comments. Hint: K(x) ≤ K(x, n) + O(1). It is easy to see that K(x, n) ≤ K(x|n, K(n)) + K(n) + O(1). Source: [G.J. Chaitin, J. Assoc. Comput. Mach., 22(1975), 329–340].
[32] Show that Kamae's result, Exercise 2.7.6 on page 185, does not hold for K(x|y). Comments. Source: [P. Gács, Lecture Notes on Descriptional Complexity and Randomness, Manuscript, Boston University, 1987].
[22] Show that neither K(x) nor K(x|l(x)) is invariant with respect to cyclic shifts. For example, K(x1:n) = K(xm+1:n x1:m) + O(1) is not satisfied for all m, 1 ≤ m ≤ n. Comments. Hint: choose x = 10...0, l(x) = 2^k.
[27] Let K+(x) be defined as in Example 3.2.2 on page 213. We know that K+(x) = n + K(n) + O(1). (a) Show that there are infinitely many n, m, x of length n, and y of length m, with n < m, K+(x) > n + log n + log log n, and K+(y) < m + log log m. (b) Show that K+(x) = max{K(y) : y ≤ x} + O(1), where ≤ is with
[39] Giving a definition of complexity of description, we use computable decoding functions from the set of finite binary strings to itself.However, we can consider this set with different topologies. If we consider distinct binary strings as incomparable, then this set corresponds to the natural
• [31] We can derive K(x) in another way. Define a complexity function F(x) to be a function on the natural numbers with the property that ∑_x 2^{−F(x)} ≤ 1 and such that F(x) is upper semicomputable. That is, the set {(m, x) : F(x) ≤ m} is computably enumerable. (If F(x) = ∞, then
• [19] Do Exercise 2.1.11 on page 114 for K-complexity.
[16] Let us backtrack to the original definition of a Turing machine in Section 1.7. Instead of a tape alphabet consisting of 0, 1, and the blank symbol B, we want to consider Turing machines with a tape alphabet consisting only of 0 and 1. We can modify the original effective enumeration T1,
[14] We investigate transformations of complexity under the arithmetic operation ∗. Show that (a) K(x ∗ y) ≤ K(x) + K(y) + O(1); (b) if x and y are primes, then K(x ∗ y) = K(x, y) + O(1); (c) K(x ∗ y) + log(x ∗ y) ≥ K(x, y) + O(1). Comments. Item (c): consider the prime factorization of z
[12] Use the Kraft inequality, Theorem 1.11.1, to show that K(x) ≥ log x + log log x for infinitely many x.
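The intended contradiction argument can be sketched as follows (logarithms base 2; the cutoff N is an artifact of the sketch). Suppose, toward a contradiction, that K(x) < log x + log log x for all but finitely many x, say for all x ≥ N. Then

```latex
\sum_{x} 2^{-K(x)} \;\ge\; \sum_{x \ge N} 2^{-(\log x + \log\log x)}
  \;=\; \sum_{x \ge N} \frac{1}{x \log x} \;=\; \infty,
```

which contradicts the Kraft inequality ∑_x 2^{−K(x)} ≤ 1 for the prefix-free domain of the reference machine. Hence K(x) ≥ log x + log log x for infinitely many x.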
[42] We can express the incomputability of C(x) in terms of C(C(x)|x), which measures what we may call the complexity of the complexity function. Denote l(x) by n. (a) Prove the upper bound C(C(x)|x) ≤ log n + O(1). (b) Prove the following lower bound: For each length n there are strings x such
[36] Show that 2C(a, b, c) ≤ C(a, b) + C(b, c) + C(c, a) + O(log n), where n = l(abc). Comments. For an application relating the three-dimensional volume of a geometric object in Euclidean space to the two-dimensional volumes of its projections, see the discussion in Section 6.14 on page 544. Hint: use the