1 Introduction
Repetitiveness measures for strings are an important topic in the field of string compression and indexing. Compared to traditional entropy-based measures, measures based on dictionary compression are known to better capture the repetitiveness in highly repetitive string collections [12]. Some well-known examples of dictionary-compression-based measures are: the size $r$ of the run-length Burrows–Wheeler transform [2] (RLBWT), the size $z$ of the Lempel–Ziv 77 factorization [17], and the size $b$ of the smallest bidirectional (or macro) scheme [15].
Kempa and Prezza introduced the notion of string attractors [4], which gave a unifying view of dictionary-compression-based measures. A string attractor of a string is a set of positions such that any substring of the string has at least one occurrence which contains a position in the set. The size $\gamma$ of the smallest string attractor of a word is a lower bound on the size of all known dictionary compression measures, but is NP-hard to compute. Kociumaka et al. [5, 6] introduced another measure $\delta$ that is computable in linear time, defined as the maximum, over all integers $k$, of the number of distinct substrings of length $k$ in the string divided by $k$.
The landscape of the relations between these measures has been a focus of attention. For example, since the Lempel–Ziv 77 factorization is a special case of a bidirectional scheme, $b \le z$. Also, $z = O(b \log \frac{n}{b})$ [13] and $r = O(z \log^2 n)$ [3] hold, where $n$ is the length of the string. Notice that a string can be represented in space proportional to $b$, $z$, or $r$ (with an extra factor of $\log n$ when measured in bits). Interestingly, while $\gamma$ and $\delta$ do not give a direct representation of the string, it is known that the string can be represented in $O(\gamma \log \frac{n}{\gamma})$ or $O(\delta \log \frac{n}{\delta})$ space, respectively [4, 5, 6]. On the other hand, Kociumaka et al. [5, 6] showed that for every length $n$ and integer $\delta$, there exists a family of length-$n$ strings having the same measure $\delta$ that requires $\Omega(\delta \log \frac{n}{\delta} \log n)$ bits to be encoded. Analogous results for $\gamma$ are not yet known [5, 6, 12]. The bidirectional scheme is the most powerful among the dictionary-compression-based measures. The size $b$ of the smallest bidirectional scheme is also known to satisfy $b = O(\gamma \log \frac{n}{\gamma})$, but again, the tightness of this bound was not known [12].
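Both $\delta$ and the (greedy, self-referential) Lempel–Ziv 77 factorization can be computed directly from their definitions; the following is a small quadratic-time sketch for short strings, intended only as an illustration of the two measures, not an efficient implementation.

```python
def delta(w):
    # max over k of (number of distinct length-k substrings) / k
    n = len(w)
    return max(len({w[i:i + k] for i in range(n - k + 1)}) / k
               for k in range(1, n + 1))

def lz77_size(w):
    # greedy left-to-right factorization: each factor is the longest prefix
    # of the rest that also starts at an earlier position (sources may
    # overlap the factor); otherwise a single-character literal
    i, z = 0, 0
    while i < len(w):
        best = 1
        for j in range(i):
            length = 0
            while i + length < len(w) and w[j + length] == w[i + length]:
                length += 1
            best = max(best, length)
        i += best
        z += 1
    return z
```

For example, `lz77_size("abababab")` is 3 (literals `a`, `b`, then one self-referential copy of length 6), while `delta("abababab")` is 2, attained at $k = 1$.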
Following Mantaci et al. [8, 9], Kutsukake et al. [7] investigated repetitiveness measures on Thue–Morse words [14, 16, 11] and showed that the size of the smallest string attractor for the $n$-th Thue–Morse word $t_n$ is $4$, for any $n \ge 4$. They also conjectured that the size of the smallest bidirectional scheme for the $n$-th Thue–Morse word (which has length $2^n$) is $\Theta(n)$, which would imply a separation between $\gamma$ and $b$. Possibly due to the difficulty (NP-hardness) of computing the size of the smallest bidirectional scheme of a string [15], tight bounds for $b$ have only been discovered for a very limited family of strings, most notably standard Sturmian words [10]. This was shown from the fact that the size of the RLBWT of every standard Sturmian word is $2$, therefore implying a constant upper bound on the size of the smallest bidirectional scheme.
In this paper, we prove Kutsukake et al.'s conjecture by showing that for any $n \ge 2$, the size of the smallest bidirectional scheme for $t_n$ is exactly $n + 2$. Moreover, for any value of $\gamma$, we can construct a family of strings such that $b = \Theta(\gamma \log \frac{N}{\gamma})$, where $N$ is the length of the string. Our result shows for the first time the separation between $\gamma$ and $b$, i.e., there are string families such that $b = \omega(\gamma)$.
2 Preliminaries
We consider the binary alphabet $\Sigma = \{\mathtt{a}, \mathtt{b}\}$. A string is an element of $\Sigma^*$. For any string $w$, let $|w|$ denote its length, and let $w[i]$ denote its $i$-th character for $0 \le i < |w|$. Also, for any $0 \le i \le j < |w|$, let $w[i..j] = w[i] \cdots w[j]$.
A string morphism is a function $\phi$ mapping strings to strings such that each character is replaced by a single string (deterministically), i.e., $\phi(w) = \phi(w[0]) \cdots \phi(w[|w|-1])$ for any string $w$. Let $\phi^1 = \phi$, and for any integer $k > 1$, let $\phi^k(w) = \phi(\phi^{k-1}(w))$. Now let $\mu$ be the morphism on the binary alphabet determined by $\mu(\mathtt{a}) = \mathtt{ab}$ and $\mu(\mathtt{b}) = \mathtt{ba}$. Then the $n$-th Thue–Morse word is $t_n = \mu^n(\mathtt{a})$, and its length is $2^n$.
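The definitions above translate directly into a few lines of code; a minimal sketch generating $t_n$ by iterating $\mu$:

```python
def mu(w):
    # the Thue-Morse morphism: a -> ab, b -> ba, applied character by character
    return "".join("ab" if c == "a" else "ba" for c in w)

def tm(n):
    # the n-th Thue-Morse word t_n = mu^n(a), of length 2^n
    w = "a"
    for _ in range(n):
        w = mu(w)
    return w
```

For instance, `tm(3)` is `abbabaab`, and each $t_n$ is a prefix of $t_{n+1}$.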
A list of strings $p_1, \ldots, p_k$ is called a parsing of a string $w$, if $w = p_1 \cdots p_k$. Each $p_i$ is called a phrase. A sequence $B = ((p_1, s_1), \ldots, (p_k, s_k))$ is a bidirectional scheme for $w$, if $p_1, \ldots, p_k$ is a parsing of $w$ and for all $i \in \{1, \ldots, k\}$, $s_i \in \{0, \ldots, |w|-1\} \cup \{\bot\}$, such that $w[s_i..s_i+|p_i|-1] = p_i$ if $s_i \ne \bot$, and $|p_i| = 1$ otherwise. We denote the size of the bidirectional scheme by $|B| = k$. We call $s_i$ the source of the phrase $p_i$.
If $|p_i| = 1$, then we stipulate that $s_i = \bot$, and call $p_i$ a ground phrase. (Consequently, there are no phrases of length one that have a source being a text position.) We denote the number of ground phrases in $B$ by $g_B$. For convenience, we denote the starting position of phrase $p_i$ by $\mathsf{start}(i)$, i.e., $\mathsf{start}(1) = 0$ and $\mathsf{start}(i) = \mathsf{start}(i-1) + |p_{i-1}|$ for all $i \in \{2, \ldots, k\}$.
A bidirectional scheme $B$ for the string $w$ defines a function $f_B$ over positions of $w$, where, for the phrase $p_j$ containing position $i$,
$f_B(i) = \bot$ if $s_j = \bot$, and $f_B(i) = s_j + (i - \mathsf{start}(j))$ otherwise.
Let $f_B^1 = f_B$, and for any $k' > 1$, let $f_B^{k'}(i) = f_B(f_B^{k'-1}(i))$, where we stipulate $f_B(\bot) = \bot$. It is clear that if $f_B(i) \ne \bot$, then it holds that $w[f_B(i)] = w[i]$. A bidirectional scheme $B$ for $w$ is valid, if the function $f_B$ contains no cycle, that is, for every $i \in \{0, \ldots, |w|-1\}$, there exists a $k'$ such that $f_B^{k'}(i) = \bot$. A valid bidirectional scheme of size $k$ for $w$ implies an $O(k)$ word size (compressed) representation of $w$, namely, the sequence $((|p_1|, e_1), \ldots, (|p_k|, e_k))$, where $e_i = p_i$ if $s_i = \bot$, and $e_i = s_i$ otherwise. Note that the string $w$ can be reconstructed from this sequence if and only if $B$ is valid. A parsing of $w$ is valid if there exists a list of phrase sources such that the parsing together with those sources forms a valid bidirectional scheme for $w$.
Informally, $f_B(i)$ gives the position (source) from where we want to copy the character that restores $w[i]$ when reconstructing $w$ from the compressed representation, where $f_B(i) = \bot$ indicates that the character is stored as a ground phrase, i.e., as a literal.
It is easy to see that a valid bidirectional scheme must have at least as many ground phrases as there are distinct characters appearing in $w$ (the number of ground phrases is at least $|\Sigma|$ if all characters of $\Sigma$ appear in $w$).
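Validity can be checked mechanically: build $f_B$ from the phrase list and verify that following sources from every position eventually reaches a ground phrase. A sketch, with two example schemes for $t_3 = \mathtt{abbabaab}$ of our own making (the second differs only in the first source, which creates a cycle):

```python
def f_from_scheme(w, scheme):
    # scheme: list of (phrase, source); source None marks a ground phrase
    f, pos = [None] * len(w), 0
    for phrase, src in scheme:
        assert w[pos:pos + len(phrase)] == phrase      # parsing covers w
        if src is None:
            assert len(phrase) == 1                    # grounds have length 1
        else:
            assert w[src:src + len(phrase)] == phrase  # source is an occurrence
            for r in range(len(phrase)):
                f[pos + r] = src + r
        pos += len(phrase)
    assert pos == len(w)
    return f

def is_valid(w, scheme):
    f = f_from_scheme(w, scheme)
    for i in range(len(w)):
        seen, j = set(), i
        while j is not None:       # chain must reach a ground phrase (None)
            if j in seen:
                return False       # cycle: the string cannot be reconstructed
            seen.add(j)
            j = f[j]
    return True

w = "abbabaab"                                        # t_3
good = [("ab", 3), ("b", None), ("a", None), ("ba", 2), ("ab", 0)]
bad  = [("ab", 6), ("b", None), ("a", None), ("ba", 2), ("ab", 0)]
```

Here `good` is valid, while in `bad` the phrases at positions 0 and 6 copy from each other, so decoding never bottoms out.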
3 Important Characteristics of Thue–Morse Words
Before proving our bounds, we first give some simple observations on Thue–Morse words that we will use later. Remember that the first index of $t_n$ is $0$, which is an even position.
Lemma 1
Any substring of length $2$ of $t_n$ that starts at an even position is either $\mathtt{ab}$ or $\mathtt{ba}$.
Proof
Since $t_n = \mu(t_{n-1})$, the morphism $\mu$ implies that any substring of length $2$ starting at an even position is either $\mu(\mathtt{a}) = \mathtt{ab}$ or $\mu(\mathtt{b}) = \mathtt{ba}$. ∎
Lemma 2 (Theorem 2.2.3 of [1])
$t_n$ has no overlapping factors, i.e., two distinct occurrences of the same string in $t_n$ never share a common position.
Lemma 3
$\mathtt{abab}$ and $\mathtt{baba}$ only occur at even positions in $t_n$.
Proof
Suppose that $\mathtt{abab}$ occurs at an odd position $p \ge 1$. Then $t_n[p] = \mathtt{a}$, and since $p - 1$ is even, Lemma 1 implies $t_n[p-1..p] = \mathtt{ba}$. Hence $t_n[p-1..p+3] = \mathtt{babab}$, which contains two overlapping occurrences of $\mathtt{bab}$, contradicting Lemma 2. The case of $\mathtt{baba}$ is symmetric. ∎
Let the parity of an integer $i$ be $i \bmod 2$.
Lemma 4
For any substring $w$ of $t_n$ with $|w| \ge 4$, the parities of all occurrences of $w$ in $t_n$ are the same.
Proof
If $w$ contains $\mathtt{aa}$ or $\mathtt{bb}$ at some offset $j$, then, since by Lemma 1 neither $\mathtt{aa}$ nor $\mathtt{bb}$ can start at an even position, $p + j$ is odd for every occurrence $p$ of $w$, which fixes the parity of $p$. Otherwise, $w$ alternates between $\mathtt{a}$ and $\mathtt{b}$, and since $|w| \ge 4$, $w$ contains $\mathtt{abab}$ or $\mathtt{baba}$ at some offset $j$; by Lemma 3, $p + j$ is even for every occurrence $p$, which again fixes the parity of $p$. ∎
Further, we use that $t_n$ is a prefix of $t_{n'}$ for any $n \le n'$.
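The combinatorial facts above, under our reading of the lemmas, can be checked empirically for small $n$; a quick sketch:

```python
def mu(w):
    return "".join("ab" if c == "a" else "ba" for c in w)

def tm(n):
    w = "a"
    for _ in range(n):
        w = mu(w)
    return w

def occ(w, p):
    # all occurrence positions of p in w
    return [i for i in range(len(w) - len(p) + 1) if w.startswith(p, i)]

w = tm(10)
# Lemma 1: length-2 substrings at even positions are ab or ba
assert all(w[i:i + 2] in ("ab", "ba") for i in range(0, len(w) - 1, 2))
# Lemma 2: occurrences of the same factor never share a position
for k in (2, 3, 4):
    for p in {w[i:i + k] for i in range(len(w) - k + 1)}:
        o = occ(w, p)
        assert all(b - a >= k for a, b in zip(o, o[1:]))
# Lemma 3: abab and baba occur only at even positions
assert all(i % 2 == 0 for p in ("abab", "baba") for i in occ(w, p))
# Lemma 4: every factor of length >= 4 occurs at one parity only
for k in (4, 5, 6):
    for p in {w[i:i + k] for i in range(len(w) - k + 1)}:
        assert len({i % 2 for i in occ(w, p)}) == 1
```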
4 Upper and Lower Bounds on $b(t_n)$
We start with the upper bound on the smallest size of a (valid) bidirectional parsing by constructing such a parsing, and subsequently show that this bound is optimal by showing a lower bound whose proof is more involved.
4.1 Upper Bound
Theorem 4.1 (Upper bound)
For $n \ge 2$, there exists a valid bidirectional scheme for $t_n$ of size $n + 2$.
Proof
Proof by induction. For $n = 2$, it is clear that there is a valid bidirectional scheme of size $4$ for $t_2 = \mathtt{abba}$: take every character as a ground phrase.
Suppose that for some $n \ge 2$, there is a valid bidirectional scheme $B$ of size $k$ for $t_n$. We can assume that there are at least two ground phrases $\mathtt{a}$ and $\mathtt{b}$. Since $t_{n+1} = \mu(t_n)$, we first consider a bidirectional scheme $B'$ for $t_{n+1}$ where each phrase is constructed from phrases of $B$ by applying $\mu$, with the small exception for the two ground phrases. More precisely, the phrases of $B'$ are $\mu(p_i)$ for each non-ground phrase $p_i$ of $B$, and two length-$1$ ground phrases from each of $\mu(\mathtt{a}) = \mathtt{ab}$ and $\mu(\mathtt{b}) = \mathtt{ba}$, resulting in a parsing of size $k + 2$. For each non-ground phrase of $B'$, we can either choose the source to be (i) an occurrence of $\mathtt{ab}$ or $\mathtt{ba}$ at an even position if its length is $2$, or (ii) $2s_i$ otherwise. The latter is because $t_{n+1}[2s_i..2s_i+2|p_i|-1] = \mu(t_n[s_i..s_i+|p_i|-1]) = \mu(p_i)$. The validity of $B'$ follows from the validity of $B$, and $f_{B'}$ has no cycles. It is easy to see that for any position $i$, the parities of $i$ and $f_{B'}(i)$ are the same (unless $f_{B'}(i) = \bot$). Thus, letting $g_\mathtt{a}$ and $g_\mathtt{b}$ denote the positions of the ground phrases $\mathtt{a}$ and $\mathtt{b}$ in $B$, and noticing that the ground phrases of $B'$ lie at positions $2g_\mathtt{a}$, $2g_\mathtt{a}+1$, $2g_\mathtt{b}$, and $2g_\mathtt{b}+1$, (1) the source of an $\mathtt{a}$ at an odd position can eventually be traced to the ground phrase $\mathtt{a}$ at position $2g_\mathtt{b}+1$, and (2) the source of an $\mathtt{a}$ at an even position can eventually be traced to the ground phrase $\mathtt{a}$ at position $2g_\mathtt{a}$ (and symmetrically for $\mathtt{b}$).
Next, we modify $B'$ by combining the two consecutive ground phrases $\mathtt{a}$ and $\mathtt{b}$ corresponding to $\mu(\mathtt{a})$, and replace them with a single non-ground phrase $\mathtt{ab}$ whose source is an occurrence of $\mathtt{ab}$ at an odd position. This results in a bidirectional scheme $B''$ of size $k + 1$. From the above observations (1) and (2), it is clear that $B''$ is still valid. Thus, $B''$ is a valid bidirectional scheme for $t_{n+1}$ of size $k + 1$, thereby proving the theorem. ∎
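The induction can be made concrete: the sketch below builds, for each $n$, a scheme of size $n+2$ and verifies it by decoding. The specific source positions used ($0$, $2$, and $3$) are our own choices consistent with the parity argument above, not necessarily the ones intended in the proof.

```python
# Phrases are triples (length, source, literal): ground phrases are
# (1, None, c); non-ground phrases are (length, source, None).

def mu(w):
    return "".join("ab" if c == "a" else "ba" for c in w)

def tm(n):
    w = "a"
    for _ in range(n):
        w = mu(w)
    return w

def decode(scheme, length):
    """Reconstruct the string from the compressed form; None if invalid."""
    text, f, pos = [None] * length, [None] * length, 0
    for plen, src, lit in scheme:
        for r in range(plen):
            if src is None:
                text[pos + r] = lit
            else:
                f[pos + r] = src + r
        pos += plen
    assert pos == length
    changed = True
    while changed:                  # propagate literals along copy chains
        changed = False
        for i in range(length):
            if text[i] is None and text[f[i]] is not None:
                text[i] = text[f[i]]
                changed = True
    return "".join(text) if None not in text else None

def scheme_for(n):
    """Scheme of size n+2 for t_n, plus the indices (in the phrase list)
       of the two designated ground phrases 'a' and 'b'."""
    if n == 2:                      # t_2 = abba: four ground phrases
        return [(1, None, "a"), (1, None, "b"),
                (1, None, "b"), (1, None, "a")], 0, 1
    prev, ia, ib = scheme_for(n - 1)
    out, new_ia, new_ib = [], None, None
    for idx, (plen, src, lit) in enumerate(prev):
        if idx == ia:               # mu(a) = ab, combined into one phrase;
            out.append((2, 3, None))    # "ab" also occurs at odd position 3
        elif idx == ib:             # mu(b) = ba -> ground phrases b and a
            new_ib = len(out); out.append((1, None, "b"))
            new_ia = len(out); out.append((1, None, "a"))
        elif src is None:           # other ground c -> length-2 phrase mu(c)
            out.append((2, 0 if lit == "a" else 2, None))  # ab at 0, ba at 2
        else:                       # mu doubles length and source
            out.append((2 * plen, 2 * src, None))
    return out, new_ia, new_ib
```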
4.2 Lower Bound
Theorem 4.2 (Lower Bound)
For $n \ge 2$, the smallest valid bidirectional scheme for $t_n$ has size at least $n + 2$.
To prove Theorem 4.2, we would like to, in essence, do the opposite of what we did in the proof of Theorem 4.1, and show that we can construct a bidirectional scheme of size $k - 1$ for $t_n$, given a bidirectional scheme of size $k$ for $t_{n+1}$. However, the opposite direction involves halving the size of phrases, and thus does not work straightforwardly. Nevertheless, we will show that this can be done in an amortized way, and show the following.
Lemma 5
For any sufficiently large $n$, if there exists a valid bidirectional scheme of size $k$ for $t_n$, then, for some $m \ge 1$, there exists a valid bidirectional scheme of size at most $k - m$ for $t_{n-m}$.
Since the sizes of the smallest bidirectional schemes for the first several Thue–Morse words can be confirmed to be $n + 2$ by computer analysis, this together with Lemma 5 implies Theorem 4.2.
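The computer analysis for small words can be reproduced by exhaustive search over all parsings and all source assignments; the naive sketch below is feasible for $t_2$ and $t_3$.

```python
from itertools import product

def mu(w):
    return "".join("ab" if c == "a" else "ba" for c in w)

def tm(n):
    w = "a"
    for _ in range(n):
        w = mu(w)
    return w

def occurrences(w, p):
    return [i for i in range(len(w) - len(p) + 1) if w.startswith(p, i)]

def is_valid(w, phrases, sources):
    f, pos = [None] * len(w), 0
    for p, s in zip(phrases, sources):
        if s is not None:
            for r in range(len(p)):
                f[pos + r] = s + r
        pos += len(p)
    for i in range(len(w)):
        seen, j = set(), i
        while j is not None:
            if j in seen:
                return False       # cycle
            seen.add(j)
            j = f[j]
    return True

def smallest_scheme_size(w):
    n = len(w)
    best = n                       # all ground phrases: always valid
    for mask in range(2 ** (n - 1)):
        cuts = [0] + [i + 1 for i in range(n - 1) if mask >> i & 1] + [n]
        phrases = [w[cuts[j]:cuts[j + 1]] for j in range(len(cuts) - 1)]
        if len(phrases) >= best:
            continue
        # length-1 phrases are ground (no source); longer ones try every
        # occurrence as a source (invalid choices are rejected by is_valid)
        choices = [[None] if len(p) == 1 else occurrences(w, p)
                   for p in phrases]
        if any(is_valid(w, phrases, combo) for combo in product(*choices)):
            best = len(phrases)
    return best
```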
In the rest of the section, we give an algorithm that, given a bidirectional scheme $B$ for $t_n$, constructs a bidirectional scheme $\hat{B}$ for $t_{n-1}$, and claim that applying the algorithm repeatedly $m$ times, for some $m \ge 1$, we obtain a bidirectional scheme $B^\ast$ for $t_{n-m}$ such that $|B^\ast| \le |B| - m$. The algorithm consists of 3 main steps:

1. Elimination of length-$1$ ground phrases.

2. Elimination of odd-length phrases.

3. Application of the inverse morphism $\mu^{-1}$ on all phrases of the modified parsing.
The goal of Steps 1 and 2 is to modify the phrases of $B$ to construct a bidirectional scheme $B'$ so that all phrases in $B'$ will be of even length. When modifying the phrases, we must take care in 1) defining the source of the phrase, and 2) ensuring that no cycles are introduced in the resulting bidirectional scheme $B'$. To make this clear, we temporarily relax the definition for ground phrases in $B'$ during the modification, so that the ground phrases of $B'$ are phrases of length $2$ that start at even positions. In this way, we can be sure that any position in a length-$2$ phrase starting at an even position in $B'$ is not involved in a cycle. In Step 3, we create a new bidirectional scheme $\hat{B}$ of $t_{n-1}$ by translating all phrase lengths and sources of $B'$ according to the inverse morphism $\mu^{-1}$, i.e., we map each non-ground phrase $t_n[i..j]$ of $B'$ to the phrase $t_{n-1}[i/2..(j-1)/2]$ in $\hat{B}$. The length-$2$ ground phrases in $B'$ become length-$1$ ground phrases in $\hat{B}$, and thus we obtain a valid bidirectional scheme for $t_{n-1}$, without the relaxation, and of the same size as $B'$.
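Step 3 is a purely mechanical translation; a small sketch, where the relaxed scheme for $t_4 = \mathtt{abbabaabbaababba}$ (parsed as $\mathtt{abba|ba|ab|baab|abba}$, with the two length-$2$ phrases as relaxed ground phrases) is our own illustrative example.

```python
def halve(scheme):
    # scheme: list of (length, source) where every phrase has even length and
    # starts at an even position, every source is even, and relaxed ground
    # phrases (length 2, source None) become length-1 ground phrases
    out = []
    for length, src in scheme:
        if length == 2 and src is None:
            out.append((1, None))
        else:
            out.append((length // 2, src // 2))
    return out

s4 = [(4, 6), (2, None), (2, None), (4, 4), (4, 0)]  # relaxed scheme for t_4
s3 = halve(s4)  # a scheme for t_3 = abbabaab: ab|b|a|ba|ab
```

Here `s3` equals `[(2, 3), (1, None), (1, None), (2, 2), (2, 0)]`, a valid (non-relaxed) bidirectional scheme for $t_3$ whose ground phrases store the literals $\mathtt{b}$ and $\mathtt{a}$.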
4.2.1 Eliminating Length-$1$ Ground Phrases
The operation is done analogously and symmetrically for any length-$1$ ground phrase ($\mathtt{a}$ or $\mathtt{b}$) that may occur at an even or odd position. We describe in detail the case for a ground phrase with character $\mathtt{a}$ that occurs at some odd position $p$.
For a consecutive pair of positions $2i, 2i+1$, we call one a partner of the other. Let $p - 1$ be the partner position of the length-$1$ ground phrase at the odd position $p$. The idea is to (re)move the phrase boundary that separates partner positions so that the ground phrase disappears. Since we are considering the case where the ground phrase is at an odd position, we extend the phrase containing position $p-1$ by one character, so that it includes the length-$1$ ground phrase at $p$, thereby eliminating it. If possible, we would like to keep the source of the extended phrase the same, i.e., change $f(p)$ from $\bot$ to $f(p-1)+1$. Note that if the parity of $f(p-1)$ is equal to that of $p-1$, this is always possible (i.e., $t_n[f(p-1)+1] = t_n[p] = \mathtt{a}$ always holds by Lemma 1). However, it may be that the position $p$ gets involved in a cycle, due to this change. Notice that since we started from a valid (relaxed) bidirectional scheme, it is guaranteed that $p-1$ is not involved in a cycle, i.e., $f^{k}(p-1) \ne p-1$ for any $k \ge 1$. Therefore, we further modify the phrase boundaries, if necessary, to ensure that the source of $p$ will belong in the same phrase as the source of $p-1$. This is repeated until we are sure that all these changes made to eliminate the original length-$1$ ground phrase do not introduce any cycles in the final bidirectional scheme. In other words, we ensure, for some sufficiently large $K$, $f^{k}(p) = f^{k}(p-1) + 1$ for all $k \le K$. Then, from the acyclicity of position $p-1$, the acyclicity of position $p$ follows.
There are six cases where the process terminates, as shown in Figure 1 (Case 3 is further divided into two subcases). As noted above, as long as the parity of $f^{k}(p-1)$ is the same as that of $p-1$, the character of $f^{k}(p-1)$'s partner is always $\mathtt{a}$, and we can ensure that $f^{k}(p-1)$ and $f^{k}(p)$ are in the same phrase by only (possibly) setting $f^{k}(p) = f^{k}(p-1)+1$. Thus, we consider the cases where $k$ is the smallest integer such that the parities of $f^{k}(p-1)$ and $p-1$ differ, in which case, Lemma 4 implies that $f^{k}(p)$ is contained in a phrase in $\{\mathtt{a}, \mathtt{ab}, \mathtt{ba}, \mathtt{aba}, \mathtt{bab}\}$. Each of the six cases corresponds to a distinct occurrence of $\mathtt{a}$ in the strings of this set. We show that in each case, we can modify the phrases so that both $f^{k}(p-1)$ and $f^{k}(p)$ are in the same length-2 phrase starting at an even position, i.e., a relaxed ground phrase, and be sure that $p$ will not be involved in a cycle in the final bidirectional scheme. The details of each case are described in Figure 1.
Although Cases 1, 2, and 4 introduce a new length-$1$ ground phrase, the number of phrase boundaries that separate partner positions always decreases at the starting point, and never increases. Therefore the whole process terminates, at which point all length-$1$ ground phrases have been eliminated.
4.2.2 Eliminating Odd-Length Phrases
In this step, we eliminate all remaining phrases with odd lengths. Since there are no more length-$1$ ground phrases, we first focus on removing phrases $\mathtt{aba}$ and $\mathtt{bab}$ of length $3$. Below, we describe the operation for removing a phrase $\mathtt{aba}$ that starts at an odd position. The other cases are analogous or symmetric.
Starting with an occurrence of a phrase $\mathtt{aba}$ that starts at an odd position $p$, we know that this phrase is preceded by $\mathtt{b}$ (Lemma 1). We move the phrase boundary that separates partner positions, so that the length-$3$ phrase shrinks to a length-$2$ phrase starting at an even position, i.e., a relaxed ground phrase, in this case, by expanding the preceding phrase to absorb the leading $\mathtt{a}$. Since we have changed the source of the $\mathtt{a}$ at position $p$, we ensure that for some sufficiently large $K$, $f^{k}(p) = f^{k}(p-1)+1$ for all $k \le K$, as we did for the elimination of length-$1$ ground phrases, so that $p$ is not involved in a cycle.
There are five cases where the process terminates, as shown in Figure 2. As noted previously, as long as the parity of $f^{k}(p-1)$ is the same as that of $p-1$, the character of $f^{k}(p-1)$'s partner is always $\mathtt{a}$, and we can ensure that $f^{k}(p-1)$ and $f^{k}(p)$ are in the same phrase by only (possibly) setting $f^{k}(p) = f^{k}(p-1)+1$. Thus, we consider the cases where $k$ is the smallest integer such that the parities of $f^{k}(p-1)$ and $p-1$ differ, in which case, Lemma 4 and the previous step imply that $f^{k}(p)$ is contained in a phrase in $\{\mathtt{ab}, \mathtt{ba}, \mathtt{aba}, \mathtt{bab}\}$. Each of the five cases corresponds to a distinct occurrence of $\mathtt{a}$ in strings of this set. The details of each case are described in Figure 2.
After eliminating all phrases $\mathtt{aba}$ and $\mathtt{bab}$ of length $3$, all remaining phrases are either of length $2$ or do not belong to the set $\{\mathtt{ab}, \mathtt{ba}, \mathtt{aba}, \mathtt{bab}\}$. Therefore, we can move all phrase boundaries that separate partner positions to the right (or all of them to the left) and update the sources accordingly without introducing cycles, since length-$2$ phrases starting at odd positions become relaxed ground phrases, and the occurrences of each of the other phrases have the same parity due to Lemma 4. Thus, we now have a valid bidirectional scheme where all phrases are of even length, and length-$2$ phrases starting at even positions are considered to be relaxed ground phrases.
4.2.3 Analysis of the Number of Phrases
It is easy to see that Steps 2 and 3 do not increase the number of phrases. Also, Step 2 does not decrease the number of length-$2$ phrases that start at even positions, i.e., relaxed ground phrases, created in Step 1, which will become ground phrases in $\hat{B}$. Thus, we focus on the analysis of Step 1.
Examining each case of Fig. 1, we can see that while at the start we eliminate a length-$1$ ground phrase and decrease the number of phrases, Cases 1, 2, 3-1, and 4 introduce a new phrase, and thus do not change the total number of phrases. Also, notice that in Case 6, two ground phrases are eliminated, while the total number of phrases decreases only by one, since the second length-$1$ ground phrase is expanded. Case 3-1 can occur in total at most twice: once for consecutive phrases of $\mathtt{a}$'s and once for consecutive phrases of $\mathtt{b}$'s. Thus, we obtain the following inequality:
$|B'| \;\le\; |B| - e_{3\text{-}2} - e_{5} - e_{6}$,  (1)

where $e_{c}$ denotes the number of elimination processes that terminate in Case $c$.
5 Conclusion
We have shown that for any $n \ge 2$, the size of the smallest bidirectional scheme for the $n$-th Thue–Morse word $t_n$ is exactly $n + 2$. From the result that the smallest string attractor of $t_n$ has size $4$ for any $n \ge 4$ [7] and that $|t_n| = 2^n$, we have shown that Thue–Morse words are an example of a family of strings in which each string has $\Theta(\gamma \log \frac{N}{\gamma})$ as the size of its smallest bidirectional parsing, where $\gamma$ is the size of its smallest string attractor, and $N$ is its length. Note that we can generalize this to hold for any $\gamma$: Given a $\gamma$, concatenate $\Theta(\gamma)$ copies of $t_n$, each using distinct letters from a different binary alphabet. Finally, we add more distinct characters to make the smallest string attractor of the resulting string exactly $\gamma$. We thus can obtain a string of length $N$ with $b = \Theta(\gamma \log \frac{N}{\gamma})$.
Our result shows for the first time the separation between $\gamma$ and $b$, i.e., there are string families such that $b = \omega(\gamma)$. Whether the above generalization can be achieved by a family of binary strings is not yet known. Although it is still open whether $O(\gamma)$ space is enough to represent any string of length $N$, it seems not possible by dictionary compression, i.e., copy/pasting within the string.
References
 [1] Berstel, J., Reutenauer, C.: Square-free words and idempotent semigroups. In: Lothaire, M. (ed.) Combinatorics on Words, pp. 18–38. Cambridge Mathematical Library, Cambridge University Press, 2nd edn. (1997). https://doi.org/10.1017/CBO9780511566097.005
 [2] Burrows, M., Wheeler, D.J.: A block-sorting lossless data compression algorithm. Tech. rep. (1994)
 [3] Kempa, D., Kociumaka, T.: Resolution of the Burrows-Wheeler transform conjecture. In: 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020, Durham, NC, USA, November 16–19, 2020. pp. 1002–1013. IEEE (2020). https://doi.org/10.1109/FOCS46700.2020.00097
 [4] Kempa, D., Prezza, N.: At the roots of dictionary compression: string attractors. In: Diakonikolas, I., Kempe, D., Henzinger, M. (eds.) Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25–29, 2018. pp. 827–840. ACM (2018). https://doi.org/10.1145/3188745.3188814
 [5] Kociumaka, T., Navarro, G., Prezza, N.: Towards a definitive measure of repetitiveness. In: Kohayakawa, Y., Miyazawa, F.K. (eds.) LATIN 2020: Theoretical Informatics - 14th Latin American Symposium, São Paulo, Brazil, January 5–8, 2021, Proceedings. Lecture Notes in Computer Science, vol. 12118, pp. 207–219. Springer (2020). https://doi.org/10.1007/978-3-030-61792-9_17
 [6] Kociumaka, T., Navarro, G., Prezza, N.: Towards a definitive compressibility measure for repetitive sequences (2021)
 [7] Kutsukake, K., Matsumoto, T., Nakashima, Y., Inenaga, S., Bannai, H., Takeda, M.: On repetitiveness measures of Thue-Morse words. In: Boucher, C., Thankachan, S.V. (eds.) String Processing and Information Retrieval - 27th International Symposium, SPIRE 2020, Orlando, FL, USA, October 13–15, 2020, Proceedings. Lecture Notes in Computer Science, vol. 12303, pp. 213–220. Springer (2020). https://doi.org/10.1007/978-3-030-59212-7_15
 [8] Mantaci, S., Restivo, A., Romana, G., Rosone, G., Sciortino, M.: String attractors and combinatorics on words. In: Cherubini, A., Sabadini, N., Tini, S. (eds.) Proceedings of the 20th Italian Conference on Theoretical Computer Science, ICTCS 2019, Como, Italy, September 9–11, 2019. CEUR Workshop Proceedings, vol. 2504, pp. 57–71. CEUR-WS.org (2019), http://ceur-ws.org/Vol-2504/paper8.pdf
 [9] Mantaci, S., Restivo, A., Romana, G., Rosone, G., Sciortino, M.: A combinatorial view on string attractors. Theor. Comput. Sci. 850, 236–248 (2021). https://doi.org/10.1016/j.tcs.2020.11.006
 [10] Mantaci, S., Restivo, A., Sciortino, M.: Burrows-Wheeler transform and Sturmian words. Inf. Process. Lett. 86(5), 241–246 (2003), https://doi.org/10.1016/S0020-0190(02)00512-4
 [11] Morse, M.: Recurrent geodesics on a surface of negative curvature. Trans. Am. Math. Soc. 22, 84–100 (1921)
 [12] Navarro, G.: Indexing highly repetitive string collections, part I: Repetitiveness measures. ACM Comput. Surv. 54(2) (Mar 2021). https://doi.org/10.1145/3434399
 [13] Navarro, G., Ochoa, C., Prezza, N.: On the approximation ratio of ordered parsings. IEEE Trans. Inf. Theory 67(2), 1008–1026 (2021). https://doi.org/10.1109/TIT.2020.3042746
 [14] Prouhet, E.: Mémoire sur quelques relations entre les puissances des nombres. C. R. Acad. Sci. Paris Sér. 133, 225 (1851)
 [15] Storer, J.A., Szymanski, T.G.: Data compression via textual substitution. J. ACM 29(4), 928–951 (1982), https://doi.org/10.1145/322344.322346
 [16] Thue, A.: Über unendliche zeichenreihen. Norske vid. Selsk. Skr. Mat. Nat. Kl. 7, 1–22 (1906)
 [17] Ziv, J., Lempel, A.: A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory 23(3), 337–343 (1977). https://doi.org/10.1109/TIT.1977.1055714