An Introduction to Kolmogorov Complexity and Its Applications, 4th Edition, Ming Li, Paul Vitányi - Solutions
[30] Prove the slow growth law of Example 7.7.6 on page 615.
[25] Depth is machine-independent: If U and U′ are two different reference universal machines, prove that there exists a polynomial p and a constant c, both depending only on U and U′, such that (p(d), b + c)-deepness on either machine is a sufficient condition for (d, b)-deepness on the other.
[30] Prove the following ‘stability property.’ Deep strings cannot be quickly computed from shallow ones. More precisely, there is a polynomial p and a constant c, both depending only on the universal machine U, such that, if q is a program to compute x in d steps and if q is less than (d, …
[28] Deep strings are not easy to identify, but can be constructed by diagonalization. Prove that the following program finds a string that is (d, b)-deep for every significance parameter b ≥ n − K(d) − O(log n): “Find all x of length n such that Q^d_U(x) > 2^{−n}; print the first string of length n not in this list.”
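The diagonalization above can be sketched directly. The `run` function below is a toy stand-in for the reference universal machine U (here simply the identity machine, an assumption for illustration); a real implementation would simulate each program for d steps.

```python
from itertools import product

def run(p, steps):
    # Toy stand-in for the reference universal machine U: it merely
    # outputs its program when given enough steps. A genuine U would
    # simulate p for at most `steps` steps.
    return p if steps >= len(p) else None

def diagonal_string(n, d):
    # Accumulate the d-step algorithmic probability Q^d(x) over all
    # programs of length <= n, then print the first length-n string
    # that is NOT in the list {x : Q^d(x) > 2^{-n}}.
    Q = {}
    for k in range(n + 1):
        for bits in product("01", repeat=k):
            x = run("".join(bits), d)
            if x is not None:
                Q[x] = Q.get(x, 0.0) + 2.0 ** (-k)
    high_prob = {x for x, q in Q.items() if len(x) == n and q > 2.0 ** (-n)}
    for bits in product("01", repeat=n):   # lexicographic order
        x = "".join(bits)
        if x not in high_prob:
            return x
```

With the identity machine every length-n string gets Q^d(x) exactly 2^{−n}, so the list is empty and 0^n is returned; with a genuine universal machine the returned string is the (d, b)-deep string the exercise constructs.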
[22] Strengthen Theorem 7.7.1, Item (ii), to “If x is (d, b)-deep then depth_{1/b′}(x) = d with b′ ≤ b + min{K(b), K(d)} + O(1).” Comments. Hint: given x∗ and b∗, enumerate all programs in order of increasing running time and stop when the accumulated algorithmic probability measure …
[10] Show that a DNA sequence is not (Kolmogorov) random. Consider sequences over {A, C, G, T} such that, starting from any position 3k + 1, for k = 0, 1, ..., in the sequence, the next three letters must be one of 22 combinations (corresponding to 20 amino acids, and begin, end instructions).
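The counting behind the exercise: a block of three letters over a 4-letter alphabet needs 6 bits naively, but if only 22 of the 64 triples are allowed, each codon carries about log₂ 22 bits, a constant-factor compression, so such a sequence cannot be Kolmogorov random. The arithmetic:

```python
import math

# Bits per codon: 3 letters from a 4-letter alphabet = 6 bits naively,
# but only 22 of the 4^3 = 64 possible triples are allowed.
naive_bits = 3 * math.log2(4)      # 6.0
codon_bits = math.log2(22)         # about 4.46

# A sequence of m codons is describable in about m * codon_bits bits,
# a constant fraction of the naive 6m bits.
ratio = codon_bits / naive_bits    # about 0.74
```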
[34] Let T be a constructible time function. Show that for every time-constructible function t we have T(x) = 2^{O(Kt(x) − K(x) + log n)} iff T is polynomial time on m^t-average in L.A. Levin’s sense, that is, there exists a k such that Σ_x T^{1/k}(x) m^t(x)/l(x) < ∞. Comments. Source: [L. Antunes, L. …
[36] A probability mass function P is P-computable if it satisfies Definition 7.6.1 with the time t(n) a polynomial in n. A probability mass function P is P-samplable if there is a probabilistic Turing machine T that on input k computes a string x such that |Pr(T(k) = x) − P(x)| ≤ 1/2^k and T …
• [O46] Use the terminology of Exercise 7.6.3. The relation Q^t(x) ≥ m^t(x) is obvious. Show whether Q^t(x) = O(m^t(x)) with or without a complexity-theoretic assumption. If necessary, relax the question as in Item (b) of Exercise 7.6.3. Comments. Here we ask for the (non)existence of a time-bounded …
[26] Recall Definition 4.3.5 on page 274. The t-time-bounded universal a priori probability is defined as Q^t_U(x) = Σ_{U^t(p)=x} 2^{−l(p)}, where U^t(p) = x means that U computes x in at most t(l(x)) steps and halts. Let < denote the standard lexicographic length-increasing ordering on the strings. (a) …
[18] (a) Let l(x) = n. Show that the probabilities m^t(x) = 2^{−Kt(x)} for the set of all x’s with Kt(x) = O(log n) can be computed in time polynomial in t(n). (b) Use Item (a) to show that one can precompute m^t(x) with x ∈ A for the set A of high-probability x’s (Kt(x) = O(log n)) in polynomial time.
[35] Call a probability distribution P : N → R malign for a class of algorithms if each algorithm in the class runs in P-average time (space) equal to worst-case time (space). In Section 4.4 it was shown that m is malign for the computable algorithms. As usual, P∗(x) = Σ_{y≤x} P(y) and the function …
[31] Show that there is an algorithm FASTSEARCH that does the following: Let A : X → Y be a given algorithm or specification. Let B be an algorithm computing provably the same function as A, with computation time provably upper bounded by the function t_B(x). Let time t_B(x) be the time needed …
[27] Similar to Definition 7.5.1, one can define the Ct version of Kt. Below we set n = l(x). (a) Show that Ct(x) ≤ s(n) implies x ∈ C[s(n), 2^{s(n)}, ∞], and the last formula implies Ct(x) ≤ 2s(n). (b) For a CFL L, let C_L(n) = min{Ct(x) : x ∈ L^{=n}}. Show that C_L(n) = O(log n). (c) Define C_L(n) = …
[30] Show that symmetry of information, Theorem 2.8.2 on page 192, does not hold for Kt.Comments. Hint: For each c and large n, by diagonalization find strings x, y ∈ {0, 1}n such that Kt(x) > n 2 and Kt(y|x) > n 2 , but Kt(xy) ≤ n 2 + O(log n). Source: [D. Ronneburger, Kolmogorov complexity
[39] Use the definition of ic in Exercise 7.4.7.(a) Show that if ic(x : A) ≤ log C(x)−1 for all but finitely many x, then A is computable.(b) There is an incomputable computably enumerable set A and a constant c such that ic(x : A) ≤ log C(x) + c for all but finitely many x.Comments. Item (b)
[33] In Definition 7.4.1 on page 591, when we allow t to be an arbitrary finite time, we will remove t from ict and simply write ic.(a) Let R = {x : C(x) ≥ l(x)}. We know that R is infinite and that it contains at least one string of each length. Show that there is a constant c such that for
[35] Let t be a computable time bound. There is a computable set A such that f(x) = ict(x : A) is not computable.Comments. This result, due to L. Fortnow and M. Kummer [Ibid.], was originally conjectured by P. Orponen, K. Ko, U. Sch¨oning, and O.Watanabe, [Ibid.].
[38/O43] Consider Exercise 7.4.4 on page 595 and Lemma 7.4.2. Let us say that a set A has p-hard instances if for every polynomial t there exist a polynomial t′ and a constant c such that for infinitely many x we have ic^t(x : A) ≥ C^{t′}(x) − c. (a) (Open) Prove or disprove: every computable set A …
[31] There are sets with hard instance complexity everywhere. In particular, prove that there is a set A computable in 2^{O(n)} time such that for some constant c and for all x, ic^{exp}(x : A) ≥ C^{exp′}(x) − 2 log C^{exp′}(x) − c, where exp(n) = 2^n and exp′(n) = O(n^2 2^n).
[32] The class P/poly is defined in Exercise 7.2.6 on page 576. Use that exercise to analogously define the class P/log = ∪_{c>0} P/(c log n). Show that P/log is properly contained in IC[log, poly], which is in its turn properly contained in P/poly.
[30] The proof of Theorem 7.4.2 depends on the so-called self-reducibility of the SAT problem. A set is self-reducible if the membership question for an element can be reduced in polynomial time to the membership question for a number of shorter elements. For example, SAT is self-reducible, since an …
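A minimal sketch of SAT self-reducibility in the sense above: membership for a formula reduces to membership for formulas with one variable fixed, which also recovers a satisfying assignment. The brute-force `satisfiable` stands in for the membership oracle (an assumption for illustration); clauses are lists of signed variable indices, DIMACS style.

```python
from itertools import product

def satisfiable(clauses, n):
    # Brute-force SAT oracle stand-in (exponential; a real reduction
    # would query an actual membership oracle here).
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def assign(clauses, var, value):
    # Substitute variable `var` := value and simplify the formula.
    out = []
    for c in clauses:
        if (var if value else -var) in c:        # clause already satisfied
            continue
        out.append([l for l in c if abs(l) != var])
    return out

def find_assignment(clauses, n):
    # Self-reduction: fix variables one at a time, each step asking a
    # membership question about a strictly shorter formula.
    if not satisfiable(clauses, n):
        return None
    values = []
    for var in range(1, n + 1):
        trial = assign(clauses, var, True)
        if satisfiable(trial, n):
            values.append(True)
            clauses = trial
        else:
            values.append(False)
            clauses = assign(clauses, var, False)
    return values
```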
[25] Let t(n) be computable in time t(n), and let t′(n) = ct(n) log t(n) + c for some constant c. For a set A, a string x, and some constant c, prove: (a) ic^{t′}(x : A) ≤ C^t(x) + c, and (b) ic^{t′}(x : A) ≤ CD^t(x) + c. Comments. Item (a) and the next three exercises are from [P. Orponen, K. Ko, U. Schöning, and O. …
(a) [35] Improve Theorem 7.3.6 by proving BPP^R = P^R, where R = {x : C(x) ≥ l(x)/i} for i a positive integer. (b) [33] Show that PSPACE ⊆ P^R and NEXP ⊆ NP^R, with R as in Theorem 7.3.6. (c) [O38] Is there a larger complexity class DTIME(t), NTIME(t), or DSPACE(s), for some t or s, that is …
[43] (a) Show that there exists an algorithm that on input a string x of length n and a rational number δ > 0 outputs a list of strings of length n such that the size of the list is O(n^2) · p (with p a polynomial in 1/δ), satisfying the following property: if C(x) < n − log log n − O(1) then a (1 − …
[42] If one does not care about decompression time, then there exist fast and almost optimal probabilistic algorithms for compression. Let x be a string of length n. (a) Show that for each Turing machine U, for each computable time bound f, each algorithm that for all x maps (x, C(x)) to a program for …
[43] Let U be a standard machine as in Exercise 7.3.15. (a) Show that there exists a probabilistic algorithm that on input a string x of length n and a rational number δ satisfying 0 …
[43] A universal Turing machine U is a standard Turing machine, or standard machine for short, if for every other Turing machine V there is a polynomial-time function f such that l(f(p)) ≤ l(p) + O(1) and V(p) = U(f(p)) (when defined). (a) For every standard machine U there exists a constant c and a …
[44] (a) Show that there is a polynomial-time computable function that, with input a string x of length n, outputs a set of strings (all of length n, and at most polynomially many) such that if C(x) < n then the set contains a string y such that C(y) > C(x). (b) Show that if C(x|n) < n then there is a …
[41] (a) Show that every non-random string x of length n is within Hamming distance O(√n) of a string y (l(y) = n) such that C(y) > C(x). (b) Show that this is optimal, since the Hamming distance in Item (a) is Ω(√n) for some strings. Comments. Source: [H.M. Buhrman, L. Fortnow, I. Newman, and …
[32] We say that A is truth-table reducible to B if there are functions g_1, ..., g_m and a Boolean function f, where y_i is true iff g_i(a) ∈ B, and f(y_1, ..., y_m) is true iff a ∈ A. We consider only polynomial-time truth-table reductions, where f and the g_i’s are computable in polynomial time. It is …
[33] Show that for all t ≥ 2, the set C[t log n, n^t, ∞] is P-isomorphic to {0}∗. Comments. Source: [E. Allender and O. Watanabe, Inform. Comput., 86(1990), 160–178]. In this paper, the authors also use the sets C[t log n, n^t, ∞] to study the equivalence classes of tally sets under various types of …
[39] Show that if A is a set whose characteristic sequence is a random infinite binary sequence in the sense of Martin-Löf (Section 2.5), then P^A ≠ NP^A. Comments. This result is presented in a more general setting, using results of Section 2.5, by R.V. Book, J.H. Lutz, and K.W. Wagner in …
[32] The method of using resource-bounded Kolmogorov complexity to construct oracles in Section 7.3.1 can be used to obtain many more oracles. Define … Notice that EXPTIME is different from the class E. Let NPSPACE stand for the nondeterministic version of PSPACE. Let us consider oracle Turing machines …
[34] We construct a sparse random oracle set A as follows: For every n, n = 1, 2, ..., toss a fair coin. If the result is ‘tails,’ then we do not include any string of length n in A; if the result is ‘heads,’ then we toss the coin n times and place the resulting binary string (the ith bit …
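The construction can be simulated directly; `random.Random` stands in for the fair coin. Each length contributes at most one string, so the resulting oracle is sparse by construction.

```python
import random

def sparse_random_oracle(max_n, rng):
    # For each length n = 1, 2, ..., max_n: on 'tails' include no
    # string of length n in A; on 'heads' toss the coin n more times
    # and include the resulting n-bit string.
    A = set()
    for n in range(1, max_n + 1):
        if rng.random() < 0.5:                           # 'heads'
            A.add("".join(rng.choice("01") for _ in range(n)))
    return A
```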
[33] Show that the set {0, 1}∗ − C[log n, ∞, 2^n] is DSPACE[2^{cn}]-immune for every c < 1. Comments. Compare this exercise with Theorem 2.7.1.
[29] Use Kolmogorov complexity to show that there exists a computable oracle A such that NP^A has P^A-immune sets.
[30] Show that P = NP iff for all oracles A ⊆ C[log n, ∞, n^2] we have P^A = NP^A.
[40] Prove the following: (a) There is a sparse set in NP − P iff NE ≠ E. (b) Define Δ^E_2 analogously to Δ^p_2 (Definition 1.7.10 on page 40). If NE = Δ^E_2, and every sparse set in NP is polynomial-time many-to-one reducible (Definition 1.7.8 on page 39) to SAT ∩ C[log n, n^2, ∞], then NE = …
[26] If C[log n, n^{log n}, ∞] ∩ SAT ⊆ A_0 ⊆ SAT and A_0 ∈ P, then E = NE.
[29] Let g(n) ≤ n be an unbounded, monotonically increasing function, and let G(n) be such that for every k, lim_{n→∞} n^k/G(n) = 0. Show that C[g(n), G(n), ∞] ∩ SAT ⊆ A_0 ⊂ SAT and A_0 ∈ P implies that SAT is not P-isomorphic to SAT − A_0. Also show that SAT − A_0 is NP-complete.
[28/O43] Let SAT be the set of satisfiable Boolean formulas. By Definition 7.2.2, a set A is sparse if there is a constant c such that for all n we have d(A^{=n}) ≤ n^c + c. In Section 1.7.4 we defined that a set B is polynomial-time Turing reducible to a set C, denoted by B ≤^P_T C, if there is a …
[35] Let χ_L = χ_1χ_2 ... be the characteristic sequence of the language L ⊆ {0, 1}∗, such that χ_i = 1 iff the lexicographically ith word w_i is in L. As before, L … (c) There is a language L ∈ SPACE[2^{O(n)}] such that for all but finitely many n, we have C^{∞,2n}(χ_L …) > 2^n − 2 … (d) Use Item (c) to …
[40] Is it possible that #P problems have solutions of low time-bounded Kolmogorov complexity relative to the input? Prove that if there is a polynomial-time Turing machine that, on an input that is a Boolean formula f, prints out a polynomial-sized list of numbers among which one number is the number …
[30] A problem is in #P if there is a nondeterministic Turing machine such that for each input, the number of distinct accepting paths of the Turing machine is precisely the number of solutions of the problem for this input. #P-complete problems are defined (analogously to NP-complete problems) as …
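For intuition, the canonical #P-complete problem is #SAT: counting satisfying assignments, i.e. the accepting paths of the machine that guesses an assignment and verifies it. A brute-force sketch (the clause encoding as signed variable indices is an assumption for illustration):

```python
from itertools import product

def count_sat(clauses, n):
    # Each of the 2^n assignments is one nondeterministic path; a path
    # accepts iff every clause contains a literal it makes true.
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )
```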
[28] Let r_L be the ranking function of L. Show that if r_L is polynomial-time computable, then so is r^{−1}_L. Comments. Source: [A. Goldberg and M. Sipser, SIAM J. Comput., 20(1991), 524–536].
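The inverse direction can be seen with a binary search: since r_L is nondecreasing along the length-increasing lexicographic order, r_L^{−1}(k) is found with polynomially many evaluations of r_L. The even-parity toy language and its brute-force rank below are stand-ins for a genuine polynomial-time ranking function.

```python
def ith_string(i):
    # i-th binary string in length-increasing lexicographic order:
    # "", "0", "1", "00", "01", ...
    return bin(i + 1)[3:]

def rank(x):
    # Stand-in for a polynomial-time ranking function r_L of the toy
    # language L = {strings with an even number of 1s}: the number of
    # members of L that precede or equal x (brute force here).
    j = int("1" + x, 2) - 1
    return sum(ith_string(i).count("1") % 2 == 0 for i in range(j + 1))

def unrank(k):
    # Invert r_L by doubling then binary search: find the least index
    # whose rank reaches k; that string is the k-th member of L.
    lo, hi = 0, 1
    while rank(ith_string(hi)) < k:
        hi *= 2
    while lo < hi:
        mid = (lo + hi) // 2
        if rank(ith_string(mid)) >= k:
            hi = mid
        else:
            lo = mid + 1
    return ith_string(lo)
```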
[28] Let f be a function on the natural numbers, and let Σ = {0, 1}. A set A belongs to the class P/f if there exist another set B ∈ P and a function h : N → Σ∗ such that for all n we have l(h(n)) ≤ f(n), and for all x we have x ∈ A iff ⟨x, h(l(x))⟩ ∈ B. Define P/poly = ∪_{c>0} P/n^c. Prove that …
[30] There is an infinite set A such that for every polynomial p, CD^p(x|A) ≥ l(x)/5 for almost all x ∈ A. Comments. A corollary is that A has no sparse subsets in P^A. Source: [L. Fortnow and M. Kummer, Theoret. Comp. Sci. A, 161(1996), 123–140].
[38] For every set A in P and all constants α, ε > 0, there is a polynomial p such that for all n and for all but an ε fraction of the x ∈ A^{=n}, CD^p(x) ≤ min{log d(A^{=n}) + log^{O(1)}(n), (1 + α) log d(A^{=n}) + O(log n)}. Comments. Source: [H.M. Buhrman, L. Fortnow, and S. Laplante, SIAM J. Comput. …
[35] For every polynomial p and sufficiently large n, there exists a set of strings A ⊆ {0, 1}∗ such that A^{=n} contains more than 2^{n/50} strings and there is an x ∈ A^{=n} with CD^p(x|A^{=n}) ≥ 2 log d(A^{=n}) − O(1). Comments. This and Exercise 7.2.4 answer the open question posed in Exercise 7.2.3 … Source: [H.M. Buhrman and L. Fortnow, Proc. 14th Symp. Theoret. Aspects Comput. Sci., Lect. Notes Comput. Sci., Springer-Verlag, 1997], where a connection is given between this exercise and [L. Valiant and V. Vazirani, Theoret. Comput. Sci., 47(1986), 85–93].
[30] (a) Show that for 0 ≤ x …
[20] (a) Show that a set S is sparse iff for all x ∈ S, CD^p(x|S) ≤ O(log l(x)) for some polynomial p. (b) Show that a set S ∈ P is sparse iff for all x ∈ S, CD^p(x) ≤ O(log l(x)) for some polynomial p. Comments. Use Theorem 7.2.1. Source: [H.M. Buhrman and L. Fortnow, Proc. 14th Symp. Theoret. …
[25] Kolmogorov complexity arguments may be used to replace diagonalization in computational complexity. Prove the following using Kolmogorov complexity: (a) If lim_{n→∞} s(n)/s′(n) = 0, and s′(n) ≥ log n is computable in space s′(n), then DSPACE[s′(n)] − DSPACE[s(n)] ≠ ∅. (b) If lim_{n→∞} …
[25] Show that C[n, ∞, c] − C[log n, ∞, c log n] ≠ ∅ for some c and n large enough. Can you formulate other tradeoff results (between complexity, time, and space)?
[27/O35] Let T(n), f(n) be functions both computable in time T(n). Prove that C[f(n), T(n), ∞] ⊂ C[f(n), c2^{f(n)}T(n), ∞], for some constant c. Open problem: can we establish a tighter version of this time hierarchy, without an exponential-time gap in the hierarchy? Comments. Source: [L. …
[O45] Given a string x of length n, decide whether x can be generated from a program of length k in time t(n). Is this question NP-complete, assuming t(n) is polynomial? Comments. If this question can be answered in polynomial time for t(n) a polynomial, then one can run the algorithm for k = 1, ..., n in …
[36] We investigate the open problem of Exercise 7.1.12, Item (c), further. We call this problem ‘polynomial-time symmetry of information.’ Let p(n) be a polynomial. We say that a subset of {0, 1}∗ contains almost all strings (of {0, 1}∗) if for each n it contains a fraction of at least 1 − …
• [27/O48] In Section 3.8.1, we proved various versions of a symmetry of information theorem with no resource bounds. For example, up to an O(log l(xy)) additive term, C(x, y) = C(x) + C(y|x). Here C(x, y) = C(⟨x, y⟩), where ⟨·, ·⟩ is the standard pairing function. (a) The symmetry of information …
[38] Use the terminology of Example 7.1.3. Prove that there exists a pspace-pseudorandom infinite sequence ω such that ω_{1:n} is computable in 2^{2n} space. Comments. Hint: prove this in two steps. First, prove that there is an infinite sequence ω, computable in double exponential space, such that …
[30] Let f(n) be computable in polynomial time and satisfy Σ_n 2^{−f(n)} = ∞. Then for every infinite sequence ω there is a polynomial p such that for infinitely many n we have C^p(ω_{1:n}|n) ≤ n − f(n). Comments. This is a polynomial-time version of Theorem 2.5.1 on page 143. Source: …
[29] Call χ polynomial-time computable if for some polynomial p and Turing machine T, the machine T on input n outputs χ_{1:n} in time p(n). Prove that χ is polynomial-time computable if and only if for some polynomial p and constant c, for all n we have C^p(χ_{1:n}; n) ≤ c, where C(·; ·) is the …
[40] Use the uniform complexity of Exercise 7.1.7. We consider a time–information tradeoff theorem for resource-bounded uniform complexity. Let f_i = n^{1/i}, for i = 1, 2, ... . Construct a computable infinite sequence ω and a set of total computable, nondecreasing, unbounded functions {t_i}, where …
Source: [R.P. Daley, Inform. Contr., 23(1973), 301–312]. The fact that some Mises–Wald–Church random sequences have low Kolmogorov complexity is from [R.P. Daley, Math. Systems Theory, 9(1975), 83–94].
[40] Uniform complexity was defined in Exercise 2.3.2, page 130. Here we define time-bounded uniform complexity. For an infinite string ω and the reference universal Turing machine U, C^t(ω_{1:n}; n) = min{l(p) : ∀i ≤ n [U^t(p, i) = ω_{1:i}]}. Define CU[f, t, ∞] = {ω : ∀^∞ n [C^t(ω_{1:n}; n) ≤ …
[35] Show that there is a computably enumerable set A with characteristic sequence χ such that for all total computable functions t, and for all 0 < c < 1, … > cn. Comments. Compare this exercise with Theorem 7.1.3. R.P. Daley [Inform. Contr., 23(1973), 301–312] proved a more general form of this result using …
[39] Prove that there is a computably enumerable set A with characteristic sequence χ such that for every total computable majorant φ of C and every n, we have φ(χ_{1:n}|n) ≥ c_φ n, where c_φ is a constant independent of n (but dependent on φ). Comments. Use the proof of Theorem 7.1.3, and also …
[O46] We resume the discussion in Example 7.1.1. (a) Is there an efficient invariance theorem for prefix Kolmogorov complexity based on the partial computable prefix-function definition, Definition 3.1.2 on page 204? Efficiency here means simulation of a machine in the enumeration of the partial …
[25] Prove the results of Theorem 7.1.2 for KD^s, the space-bounded prefix complexity.
[20] Prove an invariance theorem for CD^{t,s}.
[20] Define K^{t,s} and KD^{t,s} based on self-delimiting machines, Example 7.1.1, and prove the invariance theorems for them.
[42] Acyclic edge coloring is the edge-coloring variant of acyclic coloring: an edge coloring in which every two color classes form an acyclic subgraph (that is, a forest). The acyclic chromatic index of a graph G, denoted by a′(G), is the smallest number of colors needed to have a proper acyclic …
[40] Let s_1 ... s_n be a string over an alphabet of cardinality c. A monochromatic arithmetic progression (m.a.p.) of length k is a subsequence s_i s_{i+t} s_{i+2t} ... s_{i+(k−1)t} with all characters equal. The van der Waerden number w(k; c) is the least number n such that every string of length n contains a …
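A brute-force checker makes the definition concrete, and exhaustive search over binary strings confirms the classical value w(3; 2) = 9: some string of length 8 avoids a monochromatic 3-progression, while no string of length 9 does.

```python
from itertools import product

def has_map(s, k):
    # Does s contain a monochromatic arithmetic progression of length
    # k, i.e. positions i, i+t, ..., i+(k-1)t with all characters equal?
    n = len(s)
    for t in range(1, n):
        for i in range(max(0, n - (k - 1) * t)):
            if all(s[i + j * t] == s[i] for j in range(k)):
                return True
    return False
```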
[O43] Prove the full Lovász Local Lemma using the incompressibility method. Comments. Theorem 6.13.1 proves a restricted version. The full Lovász Local Lemma is proved in [R.A. Moser, G. Tardos, J. Assoc. Comp. Mach., 57:2(2010), Article No. 11] using the earlier arguments of R.A. Moser …
[15] Let φ be a CNF formula over variables X_1, ..., X_n, containing n clauses, with at least k literals in each clause, and with each variable X_i appearing in at most 2^{k−d} clauses, where d is a large enough constant. Then φ is satisfiable, and the exact running time of the above algorithm is at …
[34] In Theorem 6.11.1 it was shown that for a deterministic protocol of, say, complexity O(1) to compute the identity function, Alice and Bob need to exchange about C(y) bits, even if the required information C(y|x) is much less than C(y). Show that for randomized protocols the communication …
[33] Let the protocol-independent communication complexity PCC(x, y|C(P) ≤ i) stand for the minimum CC^P(x, y) over all partial deterministic protocols P of complexity at most i computing f correctly on input (x, y) (on other inputs, P may output incorrect results or not halt). Trivially, PCC(x, …
[37] We continue Exercise 6.11.3. Let h_x(i) be the structure function as in Definition 5.5.6 on page 413. Define, with P a protocol that computes the identity function I, the protocol-size function p_y(j) = min{i : TCC(x, y|C(P) ≤ i, P is one-way) ≤ j}. The function p_y(j) gives the minimal …
[35] Define the protocol-independent communication complexity TCC(x, y|C(P) ≤ i) of computing a function f(x, y) as the minimum CC(x, y|P) over all deterministic total protocols P computing f(x, y) for all pairs (x, y) (l(x) = l(y) = n) with C(P) ≤ i. For example, TCC(x, y|C(P) ≤ n + O(1)) …
[24] Let f be the equality function, with f(x, y) = 1 if x = y and 0 otherwise. Show that for every deterministic protocol P computing f, we have CC(x, x|P) ≥ C(x|P) − O(1) for all x. On the other hand, there is a P of complexity O(1) such that there are x, y (x ≠ y) with C(x|P), C(y|P) ≥ n …
[24] Assume that a function f : {0, 1}^n × {0, 1}^n → {0, 1} satisfies C(f|n) ≥ 2^{2n} − n: the truth table describing the outcomes of f for the 2^n possible inputs x (the rows) and the 2^n possible inputs y (the columns) has high Kolmogorov complexity. If we flip the truth table for a …
[36] Computing the minimum index: modify the PRAM model as follows. We now have n processors P(1), ..., P(n) and only one shared memory cell, c(1). Each processor knows one input bit. If several processors attempt to write into c(1) at the same time, then they must all write the same data; otherwise …
[26] A function f(x_1, ..., x_n) is called invertible if for each i, argument x_i can be computed from {x_1, ..., x_n} − {x_i} and f(x_1, ..., x_n). Use the PRAM model with q processors defined in this section. Show that it requires Ω(min{log(b(n)/log q), log n}) time to compute any invertible function …
[15] Consider the following ‘proof’ for Exercise 6.10.20 without using Kolmogorov complexity: assume that a PRAM M adds n numbers in o(log n) time. Take any input x_1, ..., x_n. Then there is an input x_k that is ‘not useful,’ as in the hint in Exercise 6.10.20. If we change x_k to x_k + 1, then the …
[30] A parallel random access machine (PRAM), also called a ‘concurrent-read, concurrent-write priority PRAM,’ consists of a finite number of processors, each with an infinite local memory and infinite computing power, indexed as P(1), P(2), P(3), ..., and an infinite number of shared memory …
[30] Prove that if the number of states is fixed, then a 1-tape nondeterministic Turing machine with no separate input tape (with only one read/write two-way tape) can accept more sets within time bound a_2 n^a than within a_1 n^a, for 0 < a_1 < a_2 and 1 …
[38] Consider the machine models in Exercise 6.10.13, Item (c). All machines have one multidimensional tape with one head. (a) Show that an l-dimensional machine running in time T can be simulated by a probabilistic k-dimensional machine running in time O(T r (log T)^{1/k}), where r = 1 + 1/k − 1/l. (b) …
[37] A log-cost random access machine (log-cost RAM) has the following components: an infinite number of registers, each capable of holding an integer, and a finite sequence of labeled instructions, including ‘output,’ ‘branch,’ ‘load/store,’ and ‘add/subtract between two registers.’ The …
[38] A tree work tape is a complete, infinite, rooted binary tree used as storage medium (instead of a linear tape). A work tape head starts at the root and can in each step move to the direct ancestor of the currently scanned node (if it is not the root) or to either one of the direct descendants.
[48] As in Exercise 6.10.14, consider the Turing machine model of Exercise 6.10.13, but this time with one-dimensional tapes. Show that a Turing machine with two single-head one-dimensional tapes cannot recognize the set {x2x′ : x ∈ {0, 1}∗ and x′ is a prefix of x} in real time, although it can …
[40] Consider the machine model in Exercise 6.10.13, except that the work tapes are two-dimensional. Such a machine works in real time if at each step it reads a new input symbol and is online. (Then it processes and decides each initial m-length segment in precisely m steps.) Show that for such …
[40] Consider an online deterministic Turing machine with a one-way input tape, some work tapes/pushdown stores, and a one-way output tape. The result of the computation is written on the output tape. ‘Online simulation’ means that after reading a new input symbol, the simulating machine must write …
[O44] Obtain a tight bound for simulating two work tapes by one work tape for Turing machines with a two-way input tape. Comments. W. Maass, G. Schnitger, E. Szemerédi, and G. Turán [Computational Complexity, 3(1993), 392–401] proved (not using Kolmogorov complexity) the following: Let L = {A#B …
[38] Show that it takes Θ(n^{5/4}) time to transpose a Boolean matrix on a Turing machine with a two-way read-only input tape, a work tape, and a one-way write-only output tape. That is, the input is a √n × √n matrix A that is initially given on the input tape in row-major order. The Turing machine …
[37] We analyze the speed of copying strings for Turing machines with a two-way input tape and one or more work tapes. (a) Show that such a Turing machine with one work tape can copy a string of length s, initially positioned on the work tape, to a work tape segment that is d tape cells removed from …
[38] Consider the stronger offline deterministic Turing machine model with a two-way read-only input tape. Given an l × l matrix A, with l = n/log n and element size O(log n), arranged in row-major order on the two-way (one-dimensional) input tape: (a) Show that one can transpose A (that is, …
[43] Use the terminology of Exercise 6.10.7, with one-way input understood. (a) Show that simulating a linear-time deterministic 2-queue machine by a deterministic 1-queue machine takes Ω(n^2) time. (b) Show that simulating a linear-time deterministic 2-queue machine by a nondeterministic 1-queue …
[46] A k-queue machine is similar to a k-tape Turing machine with one-way input, except with the k work tapes replaced by k work queues. A queue is a first-in first-out (FIFO) device. Prove (with one-way input understood): (a) Simulating a linear-time 1-queue machine by a 1-tape Turing machine …
[O47] Does simulating a linear-time 2-tape deterministic Turing machine with one-way input by a 1-tape nondeterministic Turing machine with one-way input require Ω(n^2) time?
[46] Prove that simulating a linear-time 2-tape deterministic Turing machine with one-way input by a 1-tape nondeterministic Turing machine with one-way input requires Ω(n^2/log^{(k)} n) time for any k, where log^{(k)} = log log ··· log is the k-fold iterated logarithm. This improves the result in …