@derrickstolee commented on Aug 3, 2021

Here is the first attempt at rebasing the vfs-2.32.0 branch onto the v2.33.0-rc0 tag. There were more fixup! commits than usual, as well as some interesting merge conflicts, both with upstream changes and between our own commits:

  1. As pointed out in [Low Priority] React to ds/write-index-with-hashfile-api #389, VFS for Git skips computing the trailing hash of the index. The upstream index-writing code (@derrickstolee's fault) now uses the hashfile API, which conflicted with this feature. Resolved by extending the hashfile API with a `skip_hash` option so it can drop the hashing work and act as a plain buffering API (see the first sketch after this list).

  2. An upstream change to the loose object cache data structure (it is now a single oidtree per object directory) required rewriting how gvfs-helper adds objects to the cache after downloading loose objects (see the second sketch after this list).

  3. I rebased Handle Scalar enlistments without src subdirectory #402 and Make 'ort' the default merge strategy #404. Thanks @vdye for squashing in the changes from #402!

  4. I updated co-authorship to recognize @vdye's work in this area.
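
For item 1, here is a minimal, self-contained model of the idea behind the resolution. This is not Git's `csum-file.c`; all names in it (`hashfile_model`, `model_flush`, `model_write`, `model_finalize`, `HASH_RAWSZ`) are invented for illustration. The point is that when `skip_hash` is set, the structure degrades to a plain buffered writer and the trailing "hash" is simply left zeroed:

```c
/*
 * Illustrative model only -- NOT Git's actual csum-file.c.
 * When skip_hash is set, flushing and finalizing skip all hash work,
 * so the struct behaves as a pure buffering API.
 */
#include <stdio.h>
#include <string.h>

#define HASH_RAWSZ 20
#define BUF_LEN 8192

struct hashfile_model {
	FILE *fp;                /* destination of the buffered writes */
	size_t offset;           /* bytes currently sitting in buffer[] */
	unsigned char buffer[BUF_LEN];
	int skip_hash;           /* 1: buffer only, do not hash */
	unsigned long checksum;  /* stand-in for the real hash context */
};

static void model_flush(struct hashfile_model *f)
{
	size_t i;

	if (!f->offset)
		return;
	if (!f->skip_hash)       /* stand-in for the_hash_algo->update_fn() */
		for (i = 0; i < f->offset; i++)
			f->checksum = f->checksum * 31 + f->buffer[i];
	fwrite(f->buffer, 1, f->offset, f->fp);
	f->offset = 0;
}

static void model_write(struct hashfile_model *f, const void *buf, size_t len)
{
	const unsigned char *p = buf;

	while (len) {
		size_t n = BUF_LEN - f->offset;
		if (n > len)
			n = len;
		memcpy(f->buffer + f->offset, p, n);
		f->offset += n;
		p += n;
		len -= n;
		if (f->offset == BUF_LEN)
			model_flush(f);
	}
}

static void model_finalize(struct hashfile_model *f, unsigned char *result)
{
	model_flush(f);
	memset(result, 0, HASH_RAWSZ);   /* empty trailing hash by default */
	if (!f->skip_hash)               /* stand-in for the_hash_algo->final_fn() */
		memcpy(result, &f->checksum, sizeof(f->checksum));
}

int main(void)
{
	struct hashfile_model f = { .fp = stdout, .skip_hash = 1 };
	unsigned char trailer[HASH_RAWSZ];

	model_write(&f, "index contents...\n", 18);
	model_finalize(&f, trailer);     /* trailer stays all zeroes here */
	return 0;
}
```

The actual resolution, shown in the range-diff below, adds a `skip_hash` field to `struct hashfile`, consults it in `hashflush()` and `finalize_hashfile()`, and has `do_write_index()` set it when `GVFS_SKIP_SHA_ON_INDEX` is configured.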
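
For item 2, a tiny illustrative model (again, not Git's actual code; `loose_cache_model` and `cache_add` are stand-ins) of why the gvfs-helper side got simpler: upstream now backs the loose-object cache with a single `oidtree` per object directory instead of one `oid_array` per two-hex-character subdirectory, so recording a just-downloaded loose object becomes a single unconditional insert rather than a per-bucket "seen or not" decision:

```c
/*
 * Illustrative model only -- the real cache is an oidtree in object-store.h.
 * Names and types below are stand-ins, not Git's.
 */
#include <stdio.h>
#include <string.h>

#define MAX_CACHED 1024

struct object_id_model { unsigned char hash[20]; };

/* stand-in for the single oidtree that now backs the loose-object cache */
struct loose_cache_model {
	struct object_id_model oids[MAX_CACHED];
	size_t nr;
};

static void cache_add(struct loose_cache_model *cache,
		      const struct object_id_model *oid)
{
	/*
	 * Before: pick the oid->hash[0] subdirectory bucket, and only append
	 * if that bucket had already been scanned. After: one insert.
	 */
	if (cache->nr < MAX_CACHED)
		cache->oids[cache->nr++] = *oid;
}

int main(void)
{
	struct loose_cache_model cache = { .nr = 0 };
	struct object_id_model oid = { .hash = { 0xab } };

	cache_add(&cache, &oid);  /* called right after writing the loose file */
	printf("cached %zu loose object(s)\n", cache.nr);
	return 0;
}
```

The actual change, visible later in the range-diff, reduces `odb_loose_cache_add_new_oid()` to a call to `odb_loose_cache()` followed by `append_loose_object()`.
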

Here is the range-diff as of the rebase onto v2.33.0.windows.1:

  3:  552bf8a0d32 =   1:  c10b2fec9be reset --stdin: trim carriage return from the paths
  4:  9008a15ce6b !   2:  9008d3304fa gvfs: start by adding the -gvfs suffix to the version
    @@ GIT-VERSION-GEN
      #!/bin/sh
      
      GVF=GIT-VERSION-FILE
    --DEF_VER=v2.32.0
    -+DEF_VER=v2.32.0.vfs.0.0
    +-DEF_VER=v2.33.0
    ++DEF_VER=v2.33.0.vfs.0.0
      
      LF='
      '
  5:  fdb3a3d90e9 =   3:  a14d84082c0 gvfs: ensure that the version is based on a GVFS tag
  6:  63499fa74c9 =   4:  8a0a38e8e66 gvfs: add a GVFS-specific header file
  7:  129b9a2cd58 =   5:  b9480b5c3a4 gvfs: add the core.gvfs config setting
  8:  c2838c3cd90 !   6:  212d2ffbcdf gvfs: add the feature to skip writing the index' SHA-1
    @@ Documentation/config/core.txt: core.multiPackIndex::
      core.sparseCheckout::
      	Enable "sparse checkout" feature. See linkgit:git-sparse-checkout[1]
     
    + ## csum-file.c ##
    +@@ csum-file.c: void hashflush(struct hashfile *f)
    + 	unsigned offset = f->offset;
    + 
    + 	if (offset) {
    +-		the_hash_algo->update_fn(&f->ctx, f->buffer, offset);
    ++		if (!f->skip_hash)
    ++			the_hash_algo->update_fn(&f->ctx, f->buffer, offset);
    + 		flush(f, f->buffer, offset);
    + 		f->offset = 0;
    + 	}
    +@@ csum-file.c: int finalize_hashfile(struct hashfile *f, unsigned char *result, unsigned int fl
    + 	int fd;
    + 
    + 	hashflush(f);
    +-	the_hash_algo->final_fn(f->buffer, &f->ctx);
    ++
    ++	/*
    ++	 * If we skip the hash function, be sure to create an empty hash
    ++	 * for the results.
    ++	 */
    ++	if (f->skip_hash)
    ++		memset(f->buffer, 0, the_hash_algo->rawsz);
    ++	else
    ++		the_hash_algo->final_fn(f->buffer, &f->ctx);
    ++
    + 	if (result)
    + 		hashcpy(result, f->buffer);
    + 	if (flags & CSUM_HASH_IN_STREAM)
    +@@ csum-file.c: static struct hashfile *hashfd_internal(int fd, const char *name,
    + 	f->buffer_len = buffer_len;
    + 	f->buffer = xmalloc(buffer_len);
    + 	f->check_buffer = NULL;
    ++	f->skip_hash = 0;
    + 
    + 	return f;
    + }
    +
    + ## csum-file.h ##
    +@@ csum-file.h: struct hashfile {
    + 	size_t buffer_len;
    + 	unsigned char *buffer;
    + 	unsigned char *check_buffer;
    ++
    ++	/*
    ++	 * If set to 1, skip_hash indicates that we should
    ++	 * not actually compute the hash for this hashfile and
    ++	 * instead only use it as a buffered write.
    ++	 */
    ++	int skip_hash;
    + };
    + 
    + /* Checkpoint */
    +
      ## gvfs.h ##
     @@
       * used for GVFS functionality
    @@ read-cache.c
      #include "config.h"
      #include "diff.h"
      #include "diffcore.h"
    -@@ read-cache.c: static int ce_write_flush(git_hash_ctx *context, int fd)
    - {
    - 	unsigned int buffered = write_buffer_len;
    - 	if (buffered) {
    --		the_hash_algo->update_fn(context, write_buffer, buffered);
    -+		if (!gvfs_config_is_set(GVFS_SKIP_SHA_ON_INDEX))
    -+			the_hash_algo->update_fn(context, write_buffer,
    -+						 buffered);
    - 		if (write_in_full(fd, write_buffer, buffered) < 0)
    - 			return -1;
    - 		write_buffer_len = 0;
    -@@ read-cache.c: static int ce_flush(git_hash_ctx *context, int fd, unsigned char *hash)
    +@@ read-cache.c: static int do_write_index(struct index_state *istate, struct tempfile *tempfile,
      
    - 	if (left) {
    - 		write_buffer_len = 0;
    --		the_hash_algo->update_fn(context, write_buffer, left);
    -+		if (!gvfs_config_is_set(GVFS_SKIP_SHA_ON_INDEX))
    -+			the_hash_algo->update_fn(context, write_buffer, left);
    - 	}
    + 	f = hashfd(tempfile->fd, tempfile->filename.buf);
      
    - 	/* Flush first if not enough space for hash signature */
    -@@ read-cache.c: static int ce_flush(git_hash_ctx *context, int fd, unsigned char *hash)
    - 	}
    - 
    - 	/* Append the hash signature at the end */
    --	the_hash_algo->final_fn(write_buffer + left, context);
    -+	if (!gvfs_config_is_set(GVFS_SKIP_SHA_ON_INDEX))
    -+		the_hash_algo->final_fn(write_buffer + left, context);
    - 	hashcpy(hash, write_buffer + left);
    - 	left += the_hash_algo->rawsz;
    - 	return (write_in_full(fd, write_buffer, left) < 0) ? -1 : 0;
    ++	if (gvfs_config_is_set(GVFS_SKIP_SHA_ON_INDEX))
    ++		f->skip_hash = 1;
    ++
    + 	for (i = removed = extended = 0; i < entries; i++) {
    + 		if (cache[i]->ce_flags & CE_REMOVE)
    + 			removed++;
     
      ## t/t1016-read-tree-skip-sha-on-read.sh (new) ##
     @@
  9:  47431ed9352 =   7:  0e93fb49911 gvfs: add the feature that blobs may be missing
 10:  e11b8843821 =   8:  e3cdd6b3cfb gvfs: prevent files to be deleted outside the sparse checkout
 11:  1d4fe7adc01 =   9:  efb27717272 gvfs: optionally skip reachability checks/upload pack during fetch
 12:  fbfd65a8faf =  10:  113cb12f4b7 gvfs: ensure all filters and EOL conversions are blocked
 13:  2c69e9aaa00 =  11:  ef73875a01c Add a new run_hook_strvec() function
 14:  d8175773ec5 =  12:  0fec91c97fe gvfs: allow "virtualizing" objects
 15:  88daa2a4bf0 =  13:  e2e44f6819b Hydrate missing loose objects in check_and_freshen()
 16:  45d85144291 =  14:  2d27bef90c8 Add support for read-object as a background process to retrieve missing objects
 17:  d2c5990c4bc =  15:  a2033160343 sha1_file: when writing objects, skip the read_object_hook
 18:  f7eb10d150f =  16:  f587d208e21 gvfs: add global command pre and post hook procs
 19:  c27e0320f83 =  17:  7794cbf40a2 t0400: verify that the hook is called correctly from a subdirectory
 20:  b7060e9e1cf =  18:  5913efa29ff Pass PID of git process to hooks.
 21:  aee3cf96943 =  19:  d24f0afd590 pre-command: always respect core.hooksPath
 22:  825094e22cb =  20:  a545a89c332 sparse-checkout: update files with a modify/delete conflict
 23:  2a8011f8ee1 =  21:  044d9fdaebf sparse-checkout: avoid writing entries with the skip-worktree bit
 24:  61da5b640f6 =  22:  6b8a074edfe Fix reset when using the sparse-checkout feature.
 25:  a18de53a593 =  23:  fd5e6277493 Do not remove files outside the sparse-checkout
 26:  07243b03b4d !  24:  32324f2b024 gvfs: refactor loading the core.gvfs config value
    @@ gvfs.c (new)
     +
     +int gvfs_config_is_set(int mask)
     +{
    -+	gvfs_load_config_value(0);
    ++	gvfs_load_config_value(NULL);
     +	return (core_gvfs & mask) == mask;
     +}
     
 27:  c2edd39ed7a =  25:  1ac2caaa32d send-pack: do not check for sha1 file when GVFS_MISSING_OK set
 28:  5d0b6f6fe8e =  26:  c315007d3eb cache-tree: remove use of strbuf_addf in update_one
 29:  cd7a5e4c065 =  27:  babf5e7e95a gvfs: block unsupported commands when running in a GVFS repo
 30:  4b3349e5e5b !  28:  41a841efe63 gvfs: allow overriding core.gvfs
    @@ gvfs.c: void gvfs_load_config_value(const char *value)
      
      int gvfs_config_is_set(int mask)
      {
    --	gvfs_load_config_value(0);
    +-	gvfs_load_config_value(NULL);
     +	if (!gvfs_config_loaded)
    -+		gvfs_load_config_value(0);
    ++		gvfs_load_config_value(NULL);
     +
     +	gvfs_config_loaded = 1;
      	return (core_gvfs & mask) == mask;
 31:  a6328b7c1e4 =  29:  7efb232752a BRANCHES.md: Add explanation of branches and using forks
 32:  75ae3b2163c !  30:  cfea3bbe86c Add virtual file system settings and hook proc
    @@ config.c: static int git_default_core_config(const char *var, const char *value,
      		return 0;
      	}
      
    -@@ config.c: int repo_config_get_fsmonitor(struct repository *r)
    - 	return 0;
    +@@ config.c: int git_config_get_max_percent_split_change(void)
    + 	return -1; /* default value */
      }
      
     +int git_config_get_virtualfilesystem(void)
    @@ config.c: int repo_config_get_fsmonitor(struct repository *r)
      	int is_bool, val;
     
      ## config.h ##
    -@@ config.h: int git_config_get_untracked_cache(void);
    +@@ config.h: int git_config_get_index_threads(int *dest);
    + int git_config_get_untracked_cache(void);
      int git_config_get_split_index(void);
      int git_config_get_max_percent_split_change(void);
    - int repo_config_get_fsmonitor(struct repository *r);
     +int git_config_get_virtualfilesystem(void);
      
      /* This dies if the configured or default date is in the future */
    @@ dir.c
      #include "object-store.h"
     @@ dir.c: enum pattern_match_result path_matches_pattern_list(
      	int result = NOT_MATCHED;
    - 	const char *slash_pos;
    + 	size_t slash_pos;
      
     +	/*
     +	 * The virtual file system data is used to prevent git from traversing
    @@ wt-status.c: static void show_sparse_checkout_in_use(struct wt_status *s,
     +	if (core_virtualfilesystem)
     +		return;
      
    - 	status_printf_ln(s, color,
    - 			 _("You are in a sparse checkout with %d%% of tracked files present."),
    + 	if (s->state.sparse_checkout_percentage == SPARSE_CHECKOUT_SPARSE_INDEX)
    + 		status_printf_ln(s, color, _("You are in a sparse checkout."));
 33:  75412d39563 =  31:  2bc1e53080e Update the virtualfilesystem support
 34:  54dfe15f6ea =  32:  d69bd0bccc1 virtualfilesystem: don't run the virtual file system hook if the index has been redirected
 35:  f17c590b396 =  33:  81f0bd58d19 virtualfilesystem: fix bug with symlinks being ignored
 36:  03c0075da60 =  34:  95f3217d95c virtualfilesystem: check if directory is included
 37:  29dfe882187 =  35:  7495c9edf68 vfs: fix case where directories not handled correctly
 38:  0f692ce8c53 =  36:  c4bc864cf14 backwards-compatibility: support the post-indexchanged hook
 39:  b95552117aa =  37:  7d85dff7074 status: add status serialization mechanism
 40:  c792ab86bfa =  38:  43250d6a5d1 Teach ahead-behind and serialized status to play nicely together
 41:  9fd24dfa454 =  39:  57e1cd9777c status: serialize to path
 42:  535a460afa7 =  40:  d3b031cde4d status: reject deserialize in V2 and conflicts
 43:  33da5496154 =  41:  87163867225 status: fix rename reporting when using serialization cache
 44:  79add54f773 =  42:  6147f6f04a8 serialize-status: serialize global and repo-local exclude file metadata
 45:  ba76a847f33 =  43:  7a6f3b2c09d status: deserialization wait
 46:  b7755a06766 =  44:  82d5d785655 merge-recursive: avoid confusing logic in was_dirty()
 47:  2da97529ad8 =  45:  25ef5872bb4 merge-recursive: add some defensive coding to was_dirty()
 48:  439ad601489 =  46:  275e5496f1b merge-recursive: teach was_dirty() about the virtualfilesystem
 49:  252e158ec5e =  47:  4292c7262e7 status: deserialize with -uno does not print correct hint
 50:  88124dea10c =  48:  2d6458bab00 wt-status-deserialize: fix crash when -v is used
 51:  726db9dd9eb =  49:  c5af1101e68 fsmonitor: check CE_FSMONITOR_VALID in ce_uptodate
 52:  1a2b488ebb2 =  50:  d656e943cd5 fsmonitor: add script for debugging and update script for tests
 53:  2a34ed5d9c1 =  51:  5aabd28d77e status: disable deserialize when verbose output requested.
 54:  2915f9f0bb8 =  52:  d283cf35b00 t7524: add test for verbose status deserialzation
 55:  1b2d571bfe2 =  53:  bd1ec282292 deserialize-status: silently fallback if we cannot read cache file
 56:  906c27b2448 =  54:  783ad10a837 gvfs:trace2:data: add trace2 tracing around read_object_process
 57:  e22555e9c5a =  55:  ec71cdec7ed gvfs:trace2:data: status deserialization information
 58:  d5b2c0278ef =  56:  bbebe4fa79f gvfs:trace2:data: status serialization
 59:  d9596bfa851 =  57:  14f917c90fc gvfs:trace2:data: add vfs stats
 60:  b22caffd670 =  58:  d1c8b3a815f trace2: refactor setting process starting time
 61:  e734e2d6c91 =  59:  b747955aeb3 trace2:gvfs:experiment: clear_ce_flags_1
 62:  fb3b5ca3bed =  60:  e7d9a1891b5 trace2:gvfs:experiment: report_tracking
 63:  474aaf79d55 =  61:  ccf2ce5f9d2 trace2:gvfs:experiment: read_cache: annotate thread usage in read-cache
 64:  f0b0364dd13 !  62:  66bcd458167 trace2:gvfs:experiment: read-cache: time read/write of cache-tree extension
    @@ read-cache.c: static int do_write_index(struct index_state *istate, struct tempf
      
     +		trace2_region_enter("index", "write/extension/cache_tree", NULL);
      		cache_tree_write(&sb, istate->cache_tree);
    - 		err = write_index_ext_header(&c, &eoie_c, newfd, CACHE_EXT_TREE, sb.len) < 0
    - 			|| ce_write(&c, newfd, sb.buf, sb.len) < 0;
    + 		err = write_index_ext_header(f, eoie_c, CACHE_EXT_TREE, sb.len) < 0;
    + 		hashwrite(f, sb.buf, sb.len);
     +		trace2_data_intmax("index", NULL, "write/extension/cache_tree/bytes", (intmax_t)sb.len);
     +		trace2_region_leave("index", "write/extension/cache_tree", NULL);
     +
 65:  97f6957e7c3 =  63:  60c3ee64672 cache-tree: use `r` instead of `the_repository` in Trace2
 66:  09912a38a27 =  64:  4d9c8acb0a5 trace2:gvfs:experiment: add region to apply_virtualfilesystem()
 67:  290ba4839ae =  65:  da7ea1fdd4e trace2:gvfs:experiment: add region around unpack_trees()
 68:  5239c74d469 !  66:  fb7bd309919 trace2:gvfs:experiment: add region to cache_tree_fully_valid()
    @@ cache-tree.c: int cache_tree_fully_valid(struct cache_tree *it)
      			return 0;
      	}
      	return 1;
    +@@ cache-tree.c: static int must_check_existence(const struct cache_entry *ce)
    + 	return !(has_promisor_remote() && ce_skip_worktree(ce));
      }
      
     +int cache_tree_fully_valid(struct cache_tree *it)
 69:  e63dd9ac5e8 =  67:  5c5df17a629 trace2:gvfs:experiment: add unpack_entry() counter to unpack_trees() and report_tracking()
 70:  85366aae522 =  68:  79af3d02087 trace2:gvfs:experiment: increase default event depth for unpack-tree data
 71:  d1c6fa551fc !  69:  7903fc45771 trace2:gvfs:experiment: add data for check_updates() in unpack_trees()
    @@ unpack-trees.c: static int check_updates(struct unpack_trees_options *o,
      		}
      	}
      
    -@@ unpack-trees.c: static int check_updates(struct unpack_trees_options *o,
    - 		}
    - 		promisor_remote_get_direct(the_repository,
    - 					   to_fetch.oid, to_fetch.nr);
    -+		sum_prefetch = to_fetch.nr;
    - 		oid_array_clear(&to_fetch);
    - 	}
    - 
     @@ unpack-trees.c: static int check_updates(struct unpack_trees_options *o,
      
      			if (last_pc_queue_size == pc_queue_size())
 72:  91209e591b0 !  70:  1691c794921 Trace2:gvfs:experiment: capture more 'tracking' details
    @@ remote.c: int format_tracking_info(struct branch *branch, struct strbuf *sb,
      	sti = stat_tracking_info(branch, &ours, &theirs, &full_base, 0, abf);
     +	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_flags", abf);
     +	trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_result", sti);
    -+	if (abf == AHEAD_BEHIND_FULL) {
    ++	if (sti >= 0 && abf == AHEAD_BEHIND_FULL) {
     +	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_ahead", ours);
     +	    trace2_data_intmax("tracking", NULL, "stat_tracking_info/ab_behind", theirs);
     +	}
 73:  d69515115fc =  71:  6c364aa694d credential: set trace2_child_class for credential manager children
 74:  ffa3d22e9b9 =  72:  bf11180e076 sub-process: do not borrow cmd pointer from caller
 75:  b7bb9b2c7c3 !  73:  9e889c69836 sub-process: add subprocess_start_argv()
    @@ sub-process.c: int subprocess_start(struct hashmap *hashmap, struct subprocess_e
     +	process->trace2_child_class = "subprocess";
     +
     +	sq_quote_argv_pretty(&quoted, argv->v);
    -+	entry->cmd = strbuf_detach(&quoted, 0);
    ++	entry->cmd = strbuf_detach(&quoted, NULL);
     +
     +	err = start_command(process);
     +	if (err) {
 76:  c8b84767bb1 !  74:  28da8156f4e sha1-file: add function to update existing loose object cache
    @@ Commit message
         Signed-off-by: Jeff Hostetler <[email protected]>
     
      ## object-file.c ##
    -@@ object-file.c: struct oid_array *odb_loose_cache(struct object_directory *odb,
    - 	return &odb->loose_objects_cache[subdir_nr];
    +@@ object-file.c: struct oidtree *odb_loose_cache(struct object_directory *odb,
    + 	return odb->loose_objects_cache;
      }
      
     +void odb_loose_cache_add_new_oid(struct object_directory *odb,
     +				 const struct object_id *oid)
     +{
    -+	int subdir_nr = oid->hash[0];
    -+
    -+	if (subdir_nr < 0 ||
    -+	    subdir_nr >= ARRAY_SIZE(odb->loose_objects_subdir_seen))
    -+		BUG("subdir_nr out of range");
    -+
    -+	/*
    -+	 * If the looose object cache already has an oid_array covering
    -+	 * cell [xx], we assume that the cache was loaded *before* the
    -+	 * new object was created, so we just need to append our new one
    -+	 * to the existing array.
    -+	 *
    -+	 * Otherwise, cause the [xx] cell to be created by scanning the
    -+	 * directory.  And since this happens *after* our caller created
    -+	 * the loose object, we don't need to explicitly add it to the
    -+	 * array.
    -+	 *
    -+	 * TODO If the subdir has not been seen, we don't technically
    -+	 * TODO need to force load it now.  We could wait and let our
    -+	 * TODO caller (or whoever requested the missing object) cause
    -+	 * TODO try to read the xx/ object and fill the cache.
    -+	 * TODO Not sure it matters either way.
    -+	 */
    -+	if (odb->loose_objects_subdir_seen[subdir_nr])
    -+		append_loose_object(oid, NULL,
    -+				    &odb->loose_objects_cache[subdir_nr]);
    -+	else
    -+		odb_loose_cache(odb, oid);
    ++	struct oidtree *cache = odb_loose_cache(odb, oid);
    ++	append_loose_object(oid, NULL, cache);
     +}
     +
      void odb_clear_loose_cache(struct object_directory *odb)
      {
    - 	int i;
    + 	oidtree_clear(odb->loose_objects_cache);
     
      ## object-store.h ##
     @@ object-store.h: void add_to_alternates_memory(const char *dir);
    - struct oid_array *odb_loose_cache(struct object_directory *odb,
    + struct oidtree *odb_loose_cache(struct object_directory *odb,
      				  const struct object_id *oid);
      
     +/*
 77:  d56e873605c =  75:  b1a113df8e0 packfile: add install_packed_git_and_mru()
 78:  50e8d4c1709 =  76:  fa958388623 index-pack: avoid immediate object fetch while parsing packfile
 79:  dc9c3c4a980 !  77:  5c4fb0bfed2 gvfs-helper: create tool to fetch objects using the GVFS Protocol
    @@ Makefile: $(REMOTE_CURL_PRIMARY): remote-curl.o http.o http-walker.o GIT-LDFLAGS
     +		$(CURL_LIBCURL) $(EXPAT_LIBEXPAT) $(LIBS)
     +
      $(LIB_FILE): $(LIB_OBJS)
    - 	$(QUIET_AR)$(RM) $@ && $(AR) $(ARFLAGS) $@ $^
    + 	$(QUIET_AR)$(AR) $(ARFLAGS) $@ $^
      
     
      ## cache.h ##
    -@@ cache.h: extern int precomposed_unicode;
    +@@ cache.h: extern int core_gvfs;
    + extern int precomposed_unicode;
      extern int protect_hfs;
      extern int protect_ntfs;
    - extern const char *core_fsmonitor;
     +extern int core_use_gvfs_helper;
     +extern const char *gvfs_cache_server_url;
     +extern const char *gvfs_shared_cache_pathname;
    @@ contrib/buildsystems/CMakeLists.txt: if(CURL_FOUND)
     
      ## environment.c ##
     @@ environment.c: int protect_hfs = PROTECT_HFS_DEFAULT;
    + #define PROTECT_NTFS_DEFAULT 1
      #endif
      int protect_ntfs = PROTECT_NTFS_DEFAULT;
    - const char *core_fsmonitor;
     +int core_use_gvfs_helper;
     +const char *gvfs_cache_server_url;
     +const char *gvfs_shared_cache_pathname;
    @@ promisor-remote.c
      #include "promisor-remote.h"
      #include "config.h"
      #include "transport.h"
    -@@ promisor-remote.c: static int fetch_objects(const char *remote_name,
    - 		die(_("promisor-remote: unable to fork off fetch subprocess"));
    - 	child_in = xfdopen(child.in, "w");
    +@@ promisor-remote.c: struct promisor_remote *repo_promisor_remote_find(struct repository *r,
      
    -+
    - 	for (i = 0; i < oid_nr; i++) {
    - 		if (fputs(oid_to_hex(&oids[i]), child_in) < 0)
    - 			die_errno(_("promisor-remote: could not write to fetch subprocess"));
    -@@ promisor-remote.c: struct promisor_remote *promisor_remote_find(const char *remote_name)
    - 
    - int has_promisor_remote(void)
    + int repo_has_promisor_remote(struct repository *r)
      {
    --	return !!promisor_remote_find(NULL);
    -+	return core_use_gvfs_helper || !!promisor_remote_find(NULL);
    +-	return !!repo_promisor_remote_find(r, NULL);
    ++	return core_use_gvfs_helper || !!repo_promisor_remote_find(r, NULL);
      }
      
      static int remove_fetched_oids(struct repository *repo,
    @@ promisor-remote.c: int promisor_remote_get_direct(struct repository *repo,
     +		return gh_client__drain_queue(&ghc);
     +	}
      
    - 	promisor_remote_init();
    + 	promisor_remote_init(repo);
      
     
      ## t/helper/.gitignore ##
 80:  881fbd6f6e7 =  78:  99b784a741a gvfs-helper: fix race condition when creating loose object dirs
 81:  ca570fda380 !  79:  ccfe63aeee0 sha1-file: create shared-cache directory if it doesn't exist
    @@ Commit message
         Signed-off-by: Jeff Hostetler <[email protected]>
     
      ## cache.h ##
    -@@ cache.h: extern int protect_ntfs;
    - extern const char *core_fsmonitor;
    +@@ cache.h: extern int protect_hfs;
    + extern int protect_ntfs;
      extern int core_use_gvfs_helper;
      extern const char *gvfs_cache_server_url;
     -extern const char *gvfs_shared_cache_pathname;
    @@ config.c: static int git_default_gvfs_config(const char *var, const char *value)
      
     
      ## environment.c ##
    -@@ environment.c: int protect_ntfs = PROTECT_NTFS_DEFAULT;
    - const char *core_fsmonitor;
    +@@ environment.c: int protect_hfs = PROTECT_HFS_DEFAULT;
    + int protect_ntfs = PROTECT_NTFS_DEFAULT;
      int core_use_gvfs_helper;
      const char *gvfs_cache_server_url;
     -const char *gvfs_shared_cache_pathname;
    @@ object-file.c: const char *loose_object_path(struct repository *r, struct strbuf
       */
     @@ object-file.c: static int alt_odb_usable(struct raw_object_store *o,
      {
    - 	struct object_directory *odb;
    + 	int r;
      
     +	if (!strbuf_cmp(path, &gvfs_shared_cache_pathname)) {
     +		/*
 82:  e4efd993498 =  80:  925975ab29f gvfs-helper: better handling of network errors
 83:  4b4f3522880 =  81:  709ca1a3e7a gvfs-helper-client: properly update loose cache with fetched OID
 84:  e5ea2b60034 =  82:  f4f0d630a22 gvfs-helper: V2 robust retry and throttling
 85:  781af367b98 =  83:  a7c21d5500e gvfs-helper: expose gvfs/objects GET and POST semantics
 86:  e51758757fa =  84:  c8255947014 gvfs-helper: dramatically reduce progress noise
 87:  b8094ff3471 =  85:  d1bbc7cd7ea gvfs-helper-client.h: define struct object_id
 88:  c5f9a47db19 =  86:  8d6e8a338cf gvfs-helper: handle pack-file after single POST request
 89:  7e980158dfe =  87:  4d46d344cd4 test-gvfs-prococol, t5799: tests for gvfs-helper
 90:  974bd5caff1 =  88:  57738997b09 gvfs-helper: move result-list construction into install functions
 91:  56050fe7ac8 =  89:  21aa81cd76f t5799: add support for POST to return either a loose object or packfile
 92:  b782a730653 =  90:  1e5e0714020 t5799: cleanup wc-l and grep-c lines
 93:  8f67279bd22 =  91:  4d915c97fa3 gvfs-helper: verify loose objects after write
 94:  ee28d9d7711 =  92:  d241c5fcbb1 t7599: create corrupt blob test
 95:  a91c064978d !  93:  b86313008a3 gvfs-helper: add prefetch support
    @@ gvfs-helper.c: static int create_loose_pathname_in_odb(struct strbuf *buf_path,
     -	gh__response_status__zero(status);
     +	strvec_push(&ip.args, "git");
     +	strvec_push(&ip.args, "index-pack");
    -+	if (gh__cmd_opts.show_progress)
    ++
    ++	if (gh__cmd_opts.show_progress) {
     +		strvec_push(&ip.args, "-v");
    ++		ip.err = 0;
    ++	} else {
    ++		ip.err = -1;
    ++		ip.no_stderr = 1;
    ++	}
    + 
    +-	if (create_loose_pathname_in_odb(&buf_path, &params->loose_oid)) {
     +	strvec_pushl(&ip.args, "-o", temp_path_idx->buf, NULL);
     +	strvec_push(&ip.args, temp_path_pack->buf);
     +	ip.no_stdin = 1;
     +	ip.out = -1;
    -+	ip.err = -1;
    - 
    --	if (create_loose_pathname_in_odb(&buf_path, &params->loose_oid)) {
    ++
     +	if (pipe_command(&ip, NULL, 0, &ip_stdout, 0, NULL, 0)) {
     +		unlink(temp_path_pack->buf);
     +		unlink(temp_path_idx->buf);
 96:  b11f7b4067a =  94:  5bee1dcf3ea gvfs-helper: add prefetch .keep file for last packfile
 97:  ef3ad3d7dea =  95:  409f5feb55d gvfs-helper: do one read in my_copy_fd_len_tail()
 98:  f69cb251e72 =  96:  0f5c4409650 gvfs-helper: move content-type warning for prefetch packs
 99:  3d82c9348a7 =  97:  b7f020974f2 fetch: use gvfs-helper prefetch under config
100:  19db7ba566d =  98:  105d5e3d34c gvfs-helper: better support for concurrent packfile fetches
101:  7f39e48358a =  99:  627b418b220 remote-curl: do not call fetch-pack when using gvfs-helper
102:  e0544dd7f89 = 100:  d1812aad53c fetch: reprepare packs before checking connectivity
103:  304dfc51f8c = 101:  52c49487441 gvfs-helper: retry when creating temp files
104:  f3c6cbc9d42 = 102:  35cde780966 upload-pack: fix race condition in error messages
105:  9d566f44085 = 103:  186b63d93dd homebrew: add GitHub workflow to release Cask
106:  fa7ec1c6ded = 104:  08096cea23d maintenance: care about gvfs.sharedCache config
107:  5722ac72fe7 = 105:  6256dec1d40 unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags
108:  dafa70bf727 ! 106:  2b74bbbdb40 Adding winget workflows
    @@ Metadata
      ## Commit message ##
         Adding winget workflows
     
    - ## .github/workflows/release-winget.yaml (new) ##
    + ## .github/workflows/release-winget.yml (new) ##
     @@
     +name: "release-winget"
     +on:
109:  89cce78cf1d = 107:  7ca32a5f0fb update-microsoft-git: create barebones builtin
110:  ca0e0bf80b0 = 108:  f60cfc6f1ba update-microsoft-git: Windows implementation
111:  14c9bd1d085 = 109:  6393c8b8b53 update-microsoft-git: use brew on macOS
112:  c7c3d5abf1f ! 110:  921bcb821cf Adding readme for microsoft/git
    @@ Metadata
      ## Commit message ##
         Adding readme for microsoft/git
     
    +    Signed-off-by: Derrick Stolee <[email protected]>
    +
      ## README.md ##
     @@
     -Git for Windows
    -+Microsoft Git
    - ===============
    - 
    --[![Build status](https://github.com/git-for-windows/git/workflows/CI/PR/badge.svg)](https://github.com/git-for-windows/git/actions?query=branch%3Amaster+event%3Apush)
    +-===============
    +-
    +-[![Open in Visual Studio Code](https://open.vscode.dev/badges/open-in-vscode.svg)](https://open.vscode.dev/git-for-windows/git)
    +-[![Build status](https://github.com/git-for-windows/git/workflows/CI/PR/badge.svg)](https://github.com/git-for-windows/git/actions?query=branch%3Amain+event%3Apush)
     -[![Join the chat at https://gitter.im/git-for-windows/git](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/git-for-windows/git?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
    -+[![CI/PR](https://github.com/microsoft/git/actions/workflows/main.yml/badge.svg)](https://github.com/microsoft/git/actions/workflows/main.yml)
    - 
    +-
     -This is [Git for Windows](http://git-for-windows.github.io/), the Windows port
     -of [Git](http://git-scm.com/).
    -+This is Microsoft Git, a special Git distribution to support monorepo scenarios. If you are _not_ working in a monorepo, you are likely searching for [Git for Windows](http://git-for-windows.github.io/) instead of this codebase.
    - 
    +-
     -The Git for Windows project is run using a [governance
     -model](http://git-for-windows.github.io/governance-model.html). If you
     -encounter problems, you can report them as [GitHub
    @@ README.md
     -for Windows' [Google Group](http://groups.google.com/group/git-for-windows),
     -and [contribute bug
     -fixes](https://github.com/git-for-windows/git/wiki/How-to-participate).
    -+If you encounter problems with Microsoft Git, please report them as [GitHub issues](https://github.com/microsoft/git/issues).
    - 
    +-
    +-To build Git for Windows, please either install [Git for Windows'
    +-SDK](https://gitforwindows.org/#download-sdk), start its `git-bash.exe`, `cd`
    +-to your Git worktree and run `make`, or open the Git worktree as a folder in
    +-Visual Studio.
    +-
    +-To verify that your build works, use one of the following methods:
    +-
    +-- If you want to test the built executables within Git for Windows' SDK,
    +-  prepend `<worktree>/bin-wrappers` to the `PATH`.
    +-- Alternatively, run `make install` in the Git worktree.
    +-- If you need to test this in a full installer, run `sdk build
    +-  git-and-installer`.
    +-- You can also "install" Git into an existing portable Git via `make install
    +-  DESTDIR=<dir>` where `<dir>` refers to the top-level directory of the
    +-  portable Git. In this instance, you will want to prepend that portable Git's
    +-  `/cmd` directory to the `PATH`, or test by running that portable Git's
    +-  `git-bash.exe` or `git-cmd.exe`.
    +-- If you built using a recent Visual Studio, you can use the menu item
    +-  `Build>Install git` (you will want to click on `Project>CMake Settings for
    +-  Git` first, then click on `Edit JSON` and then point `installRoot` to the
    +-  `mingw64` directory of an already-unpacked portable Git).
    +-
    +-  As in the previous  bullet point, you will then prepend `/cmd` to the `PATH`
    +-  or run using the portable Git's `git-bash.exe` or `git-cmd.exe`.
    +-- If you want to run the built executables in-place, but in a CMD instead of
    +-  inside a Bash, you can run a snippet like this in the `git-bash.exe` window
    +-  where Git was built (ensure that the `EOF` line has no leading spaces), and
    +-  then paste into the CMD window what was put in the clipboard:
    +-
    +-  ```sh
    +-  clip.exe <<EOF
    +-  set GIT_EXEC_PATH=$(cygpath -aw .)
    +-  set PATH=$(cygpath -awp ".:contrib/scalar:/mingw64/bin:/usr/bin:$PATH")
    +-  set GIT_TEMPLATE_DIR=$(cygpath -aw templates/blt)
    +-  set GITPERLLIB=$(cygpath -aw perl/build/lib)
    +-  EOF
    +-  ```
    +-- If you want to run the built executables in-place, but outside of Git for
    +-  Windows' SDK, and without an option to set/override any environment
    +-  variables (e.g. in Visual Studio's debugger), you can call the Git executable
    +-  by its absolute path and use the `--exec-path` option, like so:
    +-
    +-  ```cmd
    +-  C:\git-sdk-64\usr\src\git\git.exe --exec-path=C:\git-sdk-64\usr\src\git help
    +-  ```
    +-
    +-  Note: for this to work, you have to hard-link (or copy) the `.dll` files from
    +-  the `/mingw64/bin` directory to the Git worktree, or add the `/mingw64/bin`
    +-  directory to the `PATH` somehow or other.
    +-
    +-To make sure that you are testing the correct binary, call `./git.exe version`
    +-in the Git worktree, and then call `git version` in a directory/window where
    +-you want to test Git, and verify that they refer to the same version (you may
    +-even want to pass the command-line option `--build-options` to look at the
    +-exact commit from which the Git version was built).
    +-
     -Git - fast, scalable, distributed revision control system
    -+Why is Microsoft Git needed?
    ++`microsoft/git` and the Scalar CLI
    ++==================================
    ++
    ++[![CI/PR](https://github.com/microsoft/git/actions/workflows/main.yml/badge.svg)](https://github.com/microsoft/git/actions/workflows/main.yml)
    ++
    ++This is `microsoft/git`, a special Git distribution to support monorepo scenarios. If you are _not_
    ++working in a monorepo, you are likely searching for
    ++[Git for Windows](http://git-for-windows.github.io/) instead of this codebase.
    ++
    ++In addition to the Git command-line interface (CLI), `microsoft/git` includes the Scalar CLI to
    ++further enable working with extremely large repositories. Scalar is a tool to apply the latest
    ++recommendations and use the most advanced Git features. You can read
    ++[the Scalar CLI documentation](contrib/scalar/scalar.txt) or read our
    ++[Scalar user guide](contrib/scalar/docs/index.md) including
    ++[the philosophy of Scalar](contrib/scalar/docs/philosophy.md).
    ++
    ++If you encounter problems with `microsoft/git`, please report them as
    ++[GitHub issues](https://github.com/microsoft/git/issues).
    ++
    ++Why is this fork needed?
    ++=========================================================
    ++
    ++Git is awesome - it's a fast, scalable, distributed version control system with an unusually rich
    ++command set that provides both high-level operations and full access to internals. What more could
    ++you ask for?
    ++
    ++Well, because Git is a distributed version control system, each Git repository has a copy of all
    ++files in the entire history. As large repositories, aka _monorepos_ grow, Git can struggle to
    ++manage all that data. As Git commands like `status` and `fetch` get slower, developers stop waiting
    ++and start switching context. And context switches harm developer productivity.
    ++
    ++`microsoft/git` is focused on addressing these performance woes and making the monorepo developer
    ++experience first-class. The Scalar CLI packages all of these recommendations into a simple set of
    ++commands.
    ++
    ++One major feature that Scalar recommends is [partial clone](https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/),
    ++which reduces the amount of data transferred in order to work with a Git repository. While several
    ++services such as GitHub support partial clone, Azure Repos instead has an older version of this
    ++functionality called
    ++[the GVFS protocol](https://docs.microsoft.com/en-us/azure/devops/learn/git/gvfs-architecture#gvfs-protocol).
    ++The integration with the GVFS protocol present in `microsoft/git` is not appropriate to include in
    ++the core Git client because partial clone is the official version of that functionality.
    ++
    ++Downloading and Installing
      =========================================================
      
     -Git is a fast, scalable, distributed revision control system with an
    @@ README.md
     -[Documentation/giteveryday.txt]: Documentation/giteveryday.txt
     -[Documentation/gitcvs-migration.txt]: Documentation/gitcvs-migration.txt
     -[Documentation/SubmittingPatches]: Documentation/SubmittingPatches
    -+Git is awesome - it's a fast, scalable, distributed version control system with an unusually rich command set that provides both high-level operations and full access to internals. What more could you ask for?
    -+
    -+Well, because Git is a distributed version control system, each Git repository has a copy of all files in the entire history. As large repositories, aka _monorepos_ grow, Git can struggle to manage all that data. As Git commands like `status` and `fetch` get slower, developers stop waiting and start switching context. And context switches harm developer productivity.
    -+
    -+Microsoft Git is focused on addressing these performance woes and making the monorepo developer experience first-class. It does so in part by working with the [GVFS protocol](https://docs.microsoft.com/en-us/azure/devops/learn/git/gvfs-architecture#gvfs-protocol) to prefetch packs of commits and trees and delay downloading of associated blobs. This is required for monorepos using [VFS for Git](https://github.com/microsoft/VFSForGit/blob/master/Readme.md). Additionally, some Git hosting providers support the GVFS protocol instead of the Git-native [partial clone feature](https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/).
    -+
    -+Downloading and Installing
    -+=========================================================
    -+
    -+If you're working in a monorepo and want to take advantage of Microsoft Git's performance boosts, you can
    -+download the latest version installer for your OS from the [Releases page](https://github.com/microsoft/git/releases). Alternatively,
    -+you can opt to install via the command line, using the below instructions for supported OSes:
    ++If you're working in a monorepo and want to take advantage of the performance boosts in
    ++`microsoft/git`, then you can download the latest version installer for your OS from the
    ++[Releases page](https://github.com/microsoft/git/releases). Alternatively, you can opt to install
    ++via the command line, using the below instructions for supported OSes:
     +
     +## Windows
    -+__Note:__ Winget is still in public preview, meaning you currently [need to take special installation steps](https://docs.microsoft.com/en-us/windows/package-manager/winget/#install-winget) (i.e. manually installing the `.appxbundle`, installing the preview version of [App Installer](https://www.microsoft.com/p/app-installer/9nblggh4nns1?ocid=9nblggh4nns1_ORSEARCH_Bing&rtc=1&activetab=pivot:overviewtab), or participating in the [Windows Insider flight ring](https://insider.windows.com/https://insider.windows.com/)).
    ++
    ++__Note:__ Winget is still in public preview, meaning you currently
    ++[need to take special installation steps](https://docs.microsoft.com/en-us/windows/package-manager/winget/#install-winget):
    ++Either manually install the `.appxbundle` available at the
    ++[preview version of App Installer](https://www.microsoft.com/p/app-installer/9nblggh4nns1?ocid=9nblggh4nns1_ORSEARCH_Bing&rtc=1&activetab=pivot:overviewtab),
    ++or participate in the
    ++[Windows Insider flight ring](https://insider.windows.com/https://insider.windows.com/)
    ++since `winget` is available by default on preview versions of Windows.
     +
     +To install with Winget, run
     +
    @@ README.md
     +winget install microsoft/git
     +```
     +
    -+To upgrade Microsoft Git, use the following Git command, which will download and install the latest release.
    ++Double-check that you have the right version by running these commands,
    ++which should have the same output:
    ++
    ++```shell
    ++git version
    ++scalar version
    ++```
    ++
    ++To upgrade `microsoft/git`, use the following Git command, which will download and install the latest
    ++release.
     +
     +```shell
     +git update-microsoft-git
     +```
     +
    -+You may also be alerted with a notification to upgrade, which presents a single-click process for running `git update-microsoft-git`.
    ++You may also be alerted with a notification to upgrade, which presents a single-click process for
    ++running `git update-microsoft-git`.
     +
     +## macOS
     +
    -+To install Microsoft Git on macOS, first [be sure that Homebrew is installed](https://brew.sh/) then install the `microsoft-git` cask with these steps:
    ++To install `microsoft/git` on macOS, first [be sure that Homebrew is installed](https://brew.sh/) then
    ++install the `microsoft-git` cask with these steps:
     +
     +```shell
     +brew tap microsoft/git
     +brew install --cask microsoft-git
     +```
     +
    -+To upgrade microsoft/git, you can run the necessary brew commands:
    ++Double-check that you have the right version by running these commands,
    ++which should have the same output:
    ++
    ++```shell
    ++git version
    ++scalar version
    ++```
    ++
    ++To upgrade microsoft/git, you can run the necessary `brew` commands:
     +
     +```shell
     +brew update
    @@ README.md
     +
     +## Linux
     +
    -+For Ubuntu/Debian distributions, `apt-get` support is coming soon. For now, though, please use the most recent [`.deb` package](https://github.com/microsoft/git/releases).
    ++For Ubuntu/Debian distributions, `apt-get` support is coming soon. For now, please use the most
    ++recent [`.deb` package](https://github.com/microsoft/git/releases). For example, you can download a
    ++specific version as follows:
     +
     +```shell
    -+wget -o microsoft-git.deb https://github.com/microsoft/git/releases/download/v2.31.1.vfs.0.1/git-vfs_2.31.1.vfs.0.1.deb
    ++wget -O microsoft-git.deb https://github.com/microsoft/git/releases/download/v2.32.0.vfs.0.2/git-vfs_2.32.0.vfs.0.2.deb
     +sudo dpkg -i microsoft-git.deb
     +```
     +
    -+For other distributions, you will need to compile and install microsoft/git from source:
    ++Double-check that you have the right version by running these commands,
    ++which should have the same output:
    ++
    ++```shell
    ++git version
    ++scalar version
    ++```
    ++
    ++For other distributions, you will need to compile and install `microsoft/git` from source:
     +
     +```shell
     +git clone https://github.com/microsoft/git microsoft-git
     +cd microsoft-git
    -+make -j12 prefix=/usr/local
    -+sudo make -j12 prefix=/usr/local install
    ++make -j12 prefix=/usr/local INCLUDE_SCALAR=YesPlease
    ++sudo make -j12 prefix=/usr/local INCLUDE_SCALAR=YesPlease install
     +```
     +
    -+For more assistance building Git from source, see [the INSTALL file in the core Git project](https://github.com/git/git/blob/master/INSTALL).
    ++For more assistance building Git from source, see
    ++[the INSTALL file in the core Git project](https://github.com/git/git/blob/master/INSTALL).
     +
     +Contributing
     +=========================================================
113:  b39873a6b7d ! 111:  8082d045f20 t1092: mark passing test as success
    @@ t/t1092-sparse-checkout-compatibility.sh: test_expect_failure 'blame with pathsp
      	test_all_match git blame deep/deeper2/deepest/a
      '
      
    --# TODO: reset currently does not behave as expected when in a
    --# sparse-checkout.
    +-# NEEDSWORK: a sparse-checkout behaves differently from a full checkout
    +-# in this scenario, but it shouldn't.
     -test_expect_failure 'checkout and reset (mixed)' '
     +# TODO: This behaves correctly in microsoft/git. Why?
     +test_expect_success 'checkout and reset (mixed)' '
120:  059c4b37955 ! 112:  4f0f6e9814b ci: run Scalar's Functional Tests
    @@ .github/workflows/scalar-functional-tests.yml (new)
     +name: Scalar Functional Tests
     +
     +env:
    -+  SCALAR_REPOSITORY: dscho/scalar
    -+  SCALAR_REF: vfs-2.32.0
    ++  SCALAR_REPOSITORY: derrickstolee/scalar
    ++  SCALAR_REF: test-scalar-in-c
     +  DEBUG_WITH_TMATE: false
    ++  SCALAR_TEST_SKIP_VSTS_INFO: true
     +
     +on:
     +  push:
    @@ .github/workflows/scalar-functional-tests.yml (new)
     +      matrix:
     +        # Order by runtime (in descending order)
     +        os: [windows-2019, macos-10.15, ubuntu-16.04, ubuntu-18.04, ubuntu-20.04]
    -+        features: [false, experimental]
    ++        # Scalar.NET used to be tested using `features: [false, experimental]`
    ++        # But currently, Scalar/C ignores `feature.scalar` altogether, so let's
    ++        # save some electrons and run only one of them...
    ++        features: [ignored]
     +        exclude:
     +          # The built-in FSMonitor is not (yet) supported on Linux
     +          - os: ubuntu-16.04
    @@ .github/workflows/scalar-functional-tests.yml (new)
     +            ;;
     +          esac
     +
    -+          $SUDO make -j5 $extra install
    ++          $SUDO make -j5 INCLUDE_SCALAR=AbsolutelyYes $extra install
     +
    -+      - name: Ensure that we use the built Git (Windows)
    -+        if: runner.os == 'Windows'
    -+        shell: powershell
    ++      - name: Ensure that we use the built Git and Scalar
    ++        shell: bash
     +        run: |
    -+          cmd /c where git
    ++          type -p git
     +          git version
    -+          if ((git version) -like "*.vfs.*") { echo Good } else { exit 1 }
    ++          case "$(git version)" in *.vfs.*) echo Good;; *) exit 1;; esac
    ++          type -p scalar
    ++          scalar version
    ++          case "$(scalar version 2>&1)" in *.vfs.*) echo Good;; *) exit 1;; esac
     +
     +      - name: Check out Scalar's source code
     +        uses: actions/checkout@v2
    @@ .github/workflows/scalar-functional-tests.yml (new)
     +          mkdir -p "$TRACE2_BASENAME/Perf"
     +          git version --build-options
     +          cd ../out
    -+          PATH="$PWD/Scalar/$BUILD_FRAGMENT:$PWD/Scalar.Service/$BUILD_FRAGMENT:$PATH"
     +          Scalar.FunctionalTests/$BUILD_FRAGMENT/Scalar.FunctionalTests$BUILD_FILE_EXT --test-scalar-on-path --test-git-on-path --timeout=300000 --full-suite
     +
     +      - name: Force-stop FSMonitor daemons and Git processes (Windows)
    -+        if: runner.os == 'Windows' && matrix.features == 'experimental' && (success() || failure())
    ++        if: runner.os == 'Windows' && (success() || failure())
     +        shell: bash
     +        run: |
     +          set -x
122:  c5c35ed9f01 ! 113:  18ee232437b Start porting `scalar.exe` to C
    @@ contrib/scalar/scalar.c (new)
     +#include "gettext.h"
     +#include "parse-options.h"
     +
    -+struct {
    ++static struct {
     +	const char *name;
     +	int (*fn)(int, const char **);
     +} builtins[] = {
123:  341fa3d2bb0 = 114:  13d82ec12a3 scalar: add a test script
124:  8a9e4a3eb37 ! 115:  e41061d4bf7 scalar register: set recommended config settings
    @@ Commit message
         Let's start implementing the `register` command. With this commit,
         recommended settings are configured upon `scalar register`.
     
    +    Co-authored-by: Victoria Dye <[email protected]>
    +    Signed-off-by: Victoria Dye <[email protected]>
         Signed-off-by: Derrick Stolee <[email protected]>
         Signed-off-by: Johannes Schindelin <[email protected]>
     
    @@ contrib/scalar/scalar.c
      #include "parse-options.h"
     +#include "config.h"
     +
    ++/*
    ++ * Remove the deepest subdirectory in the provided path string. Path must not
    ++ * include a trailing path separator. Returns 1 if parent directory found,
    ++ * otherwise 0.
    ++ */
    ++static int strbuf_parentdir(struct strbuf *buf)
    ++{
    ++	size_t len = buf->len;
    ++	size_t offset = offset_1st_component(buf->buf);
    ++	char *path_sep = find_last_dir_sep(buf->buf + offset);
    ++	strbuf_setlen(buf, path_sep ? path_sep - buf->buf : offset);
    ++
    ++	return buf->len < len;
    ++}
    ++
     +static void setup_enlistment_directory(int argc, const char **argv,
     +				       const char * const *usagestr,
    -+				       const struct option *options)
    ++				       const struct option *options,
    ++				       struct strbuf *enlistment_root)
     +{
    ++	struct strbuf path = STRBUF_INIT;
    ++	char *root;
    ++	int enlistment_found = 0;
    ++
     +	if (startup_info->have_repository)
     +		BUG("gitdir already set up?!?");
     +
     +	if (argc > 1)
     +		usage_with_options(usagestr, options);
     +
    ++	/* find the worktree, determine its corresponding root */
     +	if (argc == 1) {
    -+		char *src = xstrfmt("%s/src", argv[0]);
    -+		const char *dir = is_directory(src) ? src : argv[0];
    ++		strbuf_add_absolute_path(&path, argv[0]);
    ++	} else if (strbuf_getcwd(&path) < 0) {
    ++		die(_("need a working directory"));
    ++	}
     +
    -+		if (chdir(dir) < 0)
    -+			die_errno(_("could not switch to '%s'"), dir);
    ++	strbuf_trim_trailing_dir_sep(&path);
    ++	do {
    ++		const size_t len = path.len;
     +
    -+		free(src);
    -+	} else {
    -+		/* find the worktree, and ensure that it is named `src` */
    -+		struct strbuf path = STRBUF_INIT;
    -+
    -+		if (strbuf_getcwd(&path) < 0)
    -+			die(_("need a working directory"));
    -+
    -+		for (;;) {
    -+			size_t len = path.len;
    -+
    -+			strbuf_addstr(&path, "/src/.git");
    -+			if (is_git_directory(path.buf)) {
    -+				strbuf_setlen(&path, len);
    -+				strbuf_addstr(&path, "/src");
    -+				if (chdir(path.buf) < 0)
    -+					die_errno(_("could not switch to '%s'"),
    -+						  path.buf);
    -+				strbuf_release(&path);
    -+				break;
    -+			}
    ++		/* check if currently in enlistment root with src/ workdir */
    ++		strbuf_addstr(&path, "/src/.git");
    ++		if (is_git_directory(path.buf)) {
    ++			strbuf_strip_suffix(&path, "/.git");
     +
    -+			while (len > 0 && !is_dir_sep(path.buf[--len]))
    -+				; /* keep looking for parent directory */
    ++			if (enlistment_root)
    ++				strbuf_add(enlistment_root, path.buf, len);
     +
    -+			if (!len)
    -+				die(_("could not find enlistment root"));
    ++			enlistment_found = 1;
    ++			break;
    ++		}
     +
    ++		/* reset to original path */
    ++		strbuf_setlen(&path, len);
    ++
    ++		/* check if currently in workdir */
    ++		strbuf_addstr(&path, "/.git");
    ++		if (is_git_directory(path.buf)) {
     +			strbuf_setlen(&path, len);
    ++
    ++			if (enlistment_root) {
    ++				/*
    ++				 * If the worktree's directory's name is `src`, the enlistment is the
    ++				 * parent directory, otherwise it is identical to the worktree.
    ++				 */
    ++				root = strip_path_suffix(path.buf, "src");
    ++				strbuf_addstr(enlistment_root, root ? root : path.buf);
    ++				free(root);
    ++			}
    ++
    ++			enlistment_found = 1;
    ++			break;
     +		}
    -+	}
     +
    ++		strbuf_setlen(&path, len);
    ++	} while (strbuf_parentdir(&path));
    ++
    ++	if (!enlistment_found)
    ++		die(_("could not find enlistment root"));
    ++
    ++	if (chdir(path.buf) < 0)
    ++		die_errno(_("could not switch to '%s'"), path.buf);
    ++
    ++	strbuf_release(&path);
     +	setup_git_directory();
     +}
     +
    @@ contrib/scalar/scalar.c
     +	argc = parse_options(argc, argv, NULL, options,
     +			     usage, 0);
     +
    -+	setup_enlistment_directory(argc, argv, usage, options);
    ++	setup_enlistment_directory(argc, argv, usage, options, NULL);
     +
     +	return register_dir();
     +}
      
    - struct {
    + static struct {
      	const char *name;
      	int (*fn)(int, const char **);
      } builtins[] = {
125:  f92504fc8cd ! 116:  7eefafcfef7 scalar register/unregister: start/stop maintenance on repository
    @@ Commit message
         With this commit, `scalar register` starts those scheduled maintenance
         tasks, and `scalar unregister` stops them.
     
    +    Co-authored-by: Victoria Dye <[email protected]>
    +    Signed-off-by: Victoria Dye <[email protected]>
         Signed-off-by: Derrick Stolee <[email protected]>
         Signed-off-by: Johannes Schindelin <[email protected]>
     
    @@ contrib/scalar/scalar.c
      #include "config.h"
     +#include "run-command.h"
      
    - static void setup_enlistment_directory(int argc, const char **argv,
    - 				       const char * const *usagestr,
    + /*
    +  * Remove the deepest subdirectory in the provided path string. Path must not
     @@ contrib/scalar/scalar.c: static void setup_enlistment_directory(int argc, const char **argv,
      	setup_git_directory();
      }
    @@ contrib/scalar/scalar.c: static int cmd_register(int argc, const char **argv)
     +	argc = parse_options(argc, argv, NULL, options,
     +			     usage, 0);
     +
    -+	setup_enlistment_directory(argc, argv, usage, options);
    ++	setup_enlistment_directory(argc, argv, usage, options, NULL);
     +
     +	return unregister_dir();
     +}
     +
    - struct {
    + static struct {
      	const char *name;
      	int (*fn)(int, const char **);
      } builtins[] = {
126:  d8e544d2268 ! 117:  c0af1b66ae8 scalar: implement 'scalar list'
    @@ contrib/scalar/scalar.c: static int register_dir(void)
      }
      
      static int cmd_register(int argc, const char **argv)
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	const char *name;
      	int (*fn)(int, const char **);
      } builtins[] = {
221:  419cbc55c34 ! 118:  d14cc6d0d46 fixup! scalar unregister: handle deleted enlistment directory gracefully
    @@
      ## Metadata ##
    -Author: Victoria Dye <[email protected]>
    +Author: Johannes Schindelin <[email protected]>
     
      ## Commit message ##
    -    fixup! scalar unregister: handle deleted enlistment directory gracefully
    +    scalar unregister: handle deleted enlistment directory gracefully
     
    -    Improve handling of non-`src` workdirs in `scalar unregister`
    +    When a user deleted an enlistment manually, let's be generous and
    +    _still_ unregister it.
    +
    +    Co-authored-by: Victoria Dye <[email protected]>
    +    Signed-off-by: Victoria Dye <[email protected]>
    +    Signed-off-by: Johannes Schindelin <[email protected]>
     
      ## contrib/scalar/scalar.c ##
    -@@ contrib/scalar/scalar.c: static int cmd_run(int argc, const char **argv)
    - 	return 0;
    +@@ contrib/scalar/scalar.c: static int cmd_register(int argc, const char **argv)
    + 	return register_dir();
      }
      
     +static int remove_deleted_enlistment(struct strbuf *path)
    @@ contrib/scalar/scalar.c: static int cmd_run(int argc, const char **argv)
      {
      	struct option options[] = {
     @@ contrib/scalar/scalar.c: static int cmd_unregister(int argc, const char **argv)
    - 	 * mistake and _still_ wants to unregister the thing.
    - 	 */
    - 	if (argc == 1) {
    --		struct strbuf path = STRBUF_INIT;
    -+		struct strbuf src_path = STRBUF_INIT, workdir_path = STRBUF_INIT;
    + 	argc = parse_options(argc, argv, NULL, options,
    + 			     usage, 0);
      
    --		strbuf_addf(&path, "%s/src/.git", argv[0]);
    --		if (!is_directory(path.buf)) {
    --			int res = 0;
    ++	/*
    ++	 * Be forgiving when the enlistment or worktree does not even exist any
    ++	 * longer; This can be the case if a user deleted the worktree by
    ++	 * mistake and _still_ wants to unregister the thing.
    ++	 */
    ++	if (argc == 1) {
    ++		struct strbuf src_path = STRBUF_INIT, workdir_path = STRBUF_INIT;
    ++
     +		strbuf_addf(&src_path, "%s/src/.git", argv[0]);
     +		strbuf_addf(&workdir_path, "%s/.git", argv[0]);
     +		if (!is_directory(src_path.buf) && !is_directory(workdir_path.buf)) {
     +			/* remove possible matching registrations */
     +			int res = -1;
    - 
    --			strbuf_strip_suffix(&path, "/.git");
    --			strbuf_realpath_forgiving(&path, path.buf, 1);
    ++
     +			strbuf_strip_suffix(&src_path, "/.git");
     +			res = remove_deleted_enlistment(&src_path) && res;
    - 
    --			if (run_git("config", "--global",
    --				    "--unset", "--fixed-value",
    --				    "scalar.repo", path.buf, NULL) < 0)
    --				res = -1;
    --
    --			if (run_git("config", "--global",
    --				    "--unset", "--fixed-value",
    --				    "maintenance.repo", path.buf, NULL) < 0)
    --				res = -1;
    ++
     +			strbuf_strip_suffix(&workdir_path, "/.git");
     +			res = remove_deleted_enlistment(&workdir_path) && res;
    - 
    --			strbuf_release(&path);
    ++
     +			strbuf_release(&src_path);
     +			strbuf_release(&workdir_path);
    - 			return res;
    - 		}
    --		strbuf_release(&path);
    ++			return res;
    ++		}
     +		strbuf_release(&src_path);
     +		strbuf_release(&workdir_path);
    - 	}
    - 
    ++	}
    ++
      	setup_enlistment_directory(argc, argv, usage, options, NULL);
    + 
    + 	return unregister_dir();
     
      ## contrib/scalar/t/t9099-scalar.sh ##
    -@@ contrib/scalar/t/t9099-scalar.sh: test_expect_success '`scalar clone` with GVFS-enabled server' '
    - 	)
    +@@ contrib/scalar/t/t9099-scalar.sh: test_expect_success 'scalar shows a usage' '
    + 	test_expect_code 129 scalar -h
      '
      
    ++test_expect_success 'scalar unregister' '
    ++	git init vanish/src &&
    ++	scalar register vanish/src &&
    ++	git config --get --global --fixed-value \
    ++		maintenance.repo "$(pwd)/vanish/src" &&
    ++	scalar list >scalar.repos &&
    ++	grep -F "$(pwd)/vanish/src" scalar.repos &&
    ++	rm -rf vanish/src/.git &&
    ++	scalar unregister vanish &&
    ++	test_must_fail git config --get --global --fixed-value \
    ++		maintenance.repo "$(pwd)/vanish/src" &&
    ++	scalar list >scalar.repos &&
    ++	! grep -F "$(pwd)/vanish/src" scalar.repos
    ++'
    ++
     +test_expect_success '`scalar register` & `unregister` with existing repo' '
     +	git init existing &&
     +	scalar register existing &&
128:  e7477acf549 ! 119:  ae3298d6766 scalar: implement the `clone` subcommand
    @@ contrib/scalar/scalar.c: static int unregister_dir(void)
      static int cmd_list(int argc, const char **argv)
      {
      	if (argc != 1)
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	const char *name;
      	int (*fn)(int, const char **);
      } builtins[] = {
129:  0edd647e536 ! 120:  1d3cff36097 scalar: test `scalar clone`
    @@ contrib/scalar/t/t9099-scalar.sh: test_expect_success 'scalar unregister' '
     +	)
     +'
     +
    - test_done
    + test_expect_success '`scalar register` & `unregister` with existing repo' '
    + 	git init existing &&
    + 	scalar register existing &&
130:  d71a56e6a94 ! 121:  d7b7bfc23fc scalar clone: suppress warning about `init.defaultBranch`
    @@ contrib/scalar/scalar.c
      #include "run-command.h"
     +#include "refs.h"
      
    - static void setup_enlistment_directory(int argc, const char **argv,
    - 				       const char * const *usagestr,
    + /*
    +  * Remove the deepest subdirectory in the provided path string. Path must not
     @@ contrib/scalar/scalar.c: static int cmd_clone(int argc, const char **argv)
      
      	dir = xstrfmt("%s/src", enlistment);
131:  ab69d1cf28a = 122:  a52a3b7baa0 scalar clone: respect --single-branch
132:  778b6291469 ! 123:  3430f62db08 scalar: implement the `run` command
    @@ contrib/scalar/scalar.c: static int cmd_register(int argc, const char **argv)
     +
     +	argc--;
     +	argv++;
    -+	setup_enlistment_directory(argc, argv, usagestr, options);
    ++	setup_enlistment_directory(argc, argv, usagestr, options, NULL);
     +	strbuf_release(&buf);
     +
     +	if (i == 0)
    @@ contrib/scalar/scalar.c: static int cmd_register(int argc, const char **argv)
     +	return 0;
     +}
     +
    - static int cmd_unregister(int argc, const char **argv)
    + static int remove_deleted_enlistment(struct strbuf *path)
      {
    - 	struct option options[] = {
    -@@ contrib/scalar/scalar.c: struct {
    + 	int res = 0;
    +@@ contrib/scalar/scalar.c: static struct {
      	{ "list", cmd_list },
      	{ "register", cmd_register },
      	{ "unregister", cmd_unregister },
133:  774149599e8 ! 124:  aee3f62c81a scalar: allow reconfiguring an existing enlistment
    @@ contrib/scalar/scalar.c: static int cmd_register(int argc, const char **argv)
     +	argc = parse_options(argc, argv, NULL, options,
     +			     usage, 0);
     +
    -+	setup_enlistment_directory(argc, argv, usage, options);
    ++	setup_enlistment_directory(argc, argv, usage, options, NULL);
     +
     +	return set_recommended_config(1);
     +}
    @@ contrib/scalar/scalar.c: static int cmd_register(int argc, const char **argv)
      static int cmd_run(int argc, const char **argv)
      {
      	struct option options[] = {
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	{ "register", cmd_register },
      	{ "unregister", cmd_unregister },
      	{ "run", cmd_run },
    @@ contrib/scalar/t/t9099-scalar.sh: test_expect_success 'scalar clone' '
     +	test true = "$(git -C one/src config core.preloadIndex)"
     +'
     +
    - test_done
    + test_expect_success '`scalar register` & `unregister` with existing repo' '
    + 	git init existing &&
    + 	scalar register existing &&
134:  ab55d0b8f3a ! 125:  f4317dd05f4 scalar reconfigure: optionally handle all registered enlistments
    @@ contrib/scalar/scalar.c: static int cmd_register(int argc, const char **argv)
      	argc = parse_options(argc, argv, NULL, options,
      			     usage, 0);
      
    --	setup_enlistment_directory(argc, argv, usage, options);
    +-	setup_enlistment_directory(argc, argv, usage, options, NULL);
     +	if (!all) {
    -+		setup_enlistment_directory(argc, argv, usage, options);
    ++		setup_enlistment_directory(argc, argv, usage, options, NULL);
     +
     +		return set_recommended_config(1);
     +	}
135:  36f29e71c59 = 126:  1bb016d58d3 scalar: support the `config` command for backwards compatibility
136:  3bcddb11ee6 ! 127:  91e1e3c36fa Implement `scalar diagnose`
    @@ contrib/scalar/scalar.c
     +#include "help.h"
     +#include "dir.h"
      
    - static void setup_enlistment_directory(int argc, const char **argv,
    - 				       const char * const *usagestr,
    + /*
    +  * Remove the deepest subdirectory in the provided path string. Path must not
     @@ contrib/scalar/scalar.c: static int unregister_dir(void)
      	return res;
      }
    @@ contrib/scalar/scalar.c: static int cmd_clone(int argc, const char **argv)
     +	argc = parse_options(argc, argv, NULL, options,
     +			     usage, 0);
     +
    -+	setup_enlistment_directory(argc, argv, usage, options);
    ++	setup_enlistment_directory(argc, argv, usage, options, &buf);
     +
    -+	strbuf_addstr(&buf, "../.scalarDiagnostics/scalar_");
    ++	strbuf_addstr(&buf, "/.scalarDiagnostics/scalar_");
     +	strbuf_addftime(&buf, "%Y%m%d_%H%M%S", localtime_r(&now, &tm), 0, 0);
     +	if (run_git("init", "-q", "-b", "dummy", "--bare", buf.buf, NULL)) {
     +		res = error(_("could not initialize temporary repository: %s"),
    @@ contrib/scalar/scalar.c: static int cmd_clone(int argc, const char **argv)
      static int cmd_list(int argc, const char **argv)
      {
      	if (argc != 1)
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	{ "unregister", cmd_unregister },
      	{ "run", cmd_run },
      	{ "reconfigure", cmd_reconfigure },
137:  c2306dbb1bf = 128:  018cd881b62 scalar diagnose: include disk space information
138:  1241cb438cd = 129:  354d45442a4 scalar: teach `diagnose` to gather packfile info
139:  c21a90d704d = 130:  57be57a21f7 scalar: teach `diagnose` to gather loose objs info
140:  7b2e9bcb18a = 131:  dfc57743109 scalar diagnose: show a spinner while staging content
141:  4a6af34fe6c ! 132:  4e1582101ca scalar: implement the `delete` command
    @@ Commit message
         then change to the parent of the enlistment directory, to allow us to
         delete the enlistment.
     
    +    Co-authored-by: Victoria Dye <[email protected]>
    +    Signed-off-by: Victoria Dye <[email protected]>
         Signed-off-by: Matthew John Cheetham <[email protected]>
         Signed-off-by: Johannes Schindelin <[email protected]>
     
    @@ contrib/scalar/scalar.c: static char *remote_default_branch(const char *url)
      	return NULL;
      }
      
    -+static void strbuf_parentdir(struct strbuf *buf)
    ++static int delete_enlistment(struct strbuf *enlistment)
     +{
    -+	int len = buf->len;
    -+	while (len > 0 && !is_dir_sep(buf->buf[--len]))
    -+		; /* keep looking for parent directory */
    -+	strbuf_setlen(buf, len);
    -+}
    -+
    -+static int delete_enlistment(void)
    -+{
    -+	struct strbuf enlistment = STRBUF_INIT;
     +#ifdef WIN32
     +	struct strbuf parent = STRBUF_INIT;
     +#endif
    @@ contrib/scalar/scalar.c: static char *remote_default_branch(const char *url)
     +	if (unregister_dir())
     +		die(_("failed to unregister repository"));
     +
    -+	/* Compute the enlistment path (parent of the worktree) */
    -+	strbuf_addstr(&enlistment, the_repository->worktree);
    -+	strbuf_parentdir(&enlistment);
    -+
     +#ifdef WIN32
     +	/* Change current directory to one outside of the enlistment
     +	   so that we may delete everything underneath it. */
    -+	strbuf_addbuf(&parent, &enlistment);
    ++	strbuf_addbuf(&parent, enlistment);
     +	strbuf_parentdir(&parent);
     +	if (chdir(parent.buf) < 0)
     +		die_errno(_("could not switch to '%s'"), parent.buf);
     +	strbuf_release(&parent);
     +#endif
     +
    -+	if (remove_dir_recursively(&enlistment, 0))
    ++	if (remove_dir_recursively(enlistment, 0))
     +		die(_("failed to delete enlistment directory"));
     +
    -+	strbuf_release(&enlistment);
     +	return 0;
     +}
     +
    @@ contrib/scalar/scalar.c: static int cmd_unregister(int argc, const char **argv)
     +		N_("scalar delete <enlistment>"),
     +		NULL
     +	};
    ++	struct strbuf enlistment = STRBUF_INIT;
    ++	int res = 0;
     +
     +	argc = parse_options(argc, argv, NULL, options,
     +			     usage, 0);
    @@ contrib/scalar/scalar.c: static int cmd_unregister(int argc, const char **argv)
     +	if (argc != 1)
     +		usage_with_options(usage, options);
     +
    -+	setup_enlistment_directory(argc, argv, usage, options);
    ++	setup_enlistment_directory(argc, argv, usage, options, &enlistment);
     +
    -+	return delete_enlistment();
    ++	res = delete_enlistment(&enlistment);
    ++	strbuf_release(&enlistment);
    ++
    ++	return res;
     +}
     +
    - struct {
    + static struct {
      	const char *name;
      	int (*fn)(int, const char **);
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	{ "run", cmd_run },
      	{ "reconfigure", cmd_reconfigure },
      	{ "diagnose", cmd_diagnose },
    @@ contrib/scalar/t/t9099-scalar.sh: test_expect_success 'scalar reconfigure' '
     +	scalar delete cloned &&
     +	test_path_is_missing cloned
     +'
    ++
    ++test_expect_success '`scalar register` parallel to worktree' '
    ++	git init test-repo/src &&
    ++	mkdir -p test-repo/out &&
    ++	scalar register test-repo/out &&
    ++	git config --get --global --fixed-value \
    ++		maintenance.repo "$(pwd)/test-repo/src" &&
    ++	scalar list >scalar.repos &&
    ++	grep -F "$(pwd)/test-repo/src" scalar.repos &&
    ++	scalar delete test-repo
    ++'
    ++
    + test_expect_success '`scalar register` & `unregister` with existing repo' '
    + 	git init existing &&
    + 	scalar register existing &&
    +@@ contrib/scalar/t/t9099-scalar.sh: test_expect_success '`scalar register` existing repo with `src` folder' '
    + 	! grep -F "$(pwd)/existing" scalar.repos
    + '
    + 
    ++test_expect_success '`scalar delete` with existing repo' '
    ++	git init existing &&
    ++	scalar register existing &&
    ++	scalar delete existing &&
    ++	test_path_is_missing existing
    ++'
     +
      test_done
142:  04dc1c21b99 ! 133:  c6de7ef8e97 scalar: implement the `version` command
    @@ Commit message
     
      ## contrib/scalar/scalar.c ##
     @@ contrib/scalar/scalar.c: static int cmd_delete(int argc, const char **argv)
    - 	return delete_enlistment();
    + 	return res;
      }
      
     +static int cmd_version(int argc, const char **argv)
    @@ contrib/scalar/scalar.c: static int cmd_delete(int argc, const char **argv)
     +	return 0;
     +}
     +
    - struct {
    + static struct {
      	const char *name;
      	int (*fn)(int, const char **);
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	{ "reconfigure", cmd_reconfigure },
      	{ "diagnose", cmd_diagnose },
      	{ "delete", cmd_delete },
143:  436b5b30f04 = 134:  6000b088d3a scalar: accept -C and -c options before the subcommand
144:  ca842cc8a25 ! 135:  9368cbfcad6 scalar: enable built-in FSMonitor on `register`
    @@ contrib/scalar/scalar.c
     +#include "simple-ipc.h"
     +#include "fsmonitor-ipc.h"
      
    - static void setup_enlistment_directory(int argc, const char **argv,
    - 				       const char * const *usagestr,
    + /*
    +  * Remove the deepest subdirectory in the provided path string. Path must not
     @@ contrib/scalar/scalar.c: static int set_recommended_config(int reconfigure)
      		{ "maintenance.incremental-repack.enabled", "true" },
      		{ "maintenance.incremental-repack.auto", "0" },
145:  ac405741360 = 136:  5379a346861 scalar unregister: stop FSMonitor daemon
146:  8ea46dcc3f1 = 137:  8b106718b3e scalar: start documenting the command
147:  1db27f0a597 = 138:  b098a090f2b scalar: document the `clone` subcommand
148:  bd3f8db9721 = 139:  7b23cf0d43a scalar: document `list`, `register` and `unregister`
149:  466e62a7daf = 140:  2bc882cfe08 scalar: document the remaining subcommands
150:  6333d1ad39a = 141:  807eed5fc0a git_config_set_multivar_in_file_gently(): add a lock timeout
  1:  62ad2255e6e = 142:  7b896d106f5 maintenance: create `launchctl` configuration using a lock file
151:  e002d4f05db = 143:  d9fa3987f6d scalar: set the config write-lock timeout to 150ms
  2:  b0d6bb0b07f = 144:  8464b821d4d maintenance: skip bootout/bootstrap when plist is registered
114:  62da4b12dc8 <   -:  ----------- fixup! Adding winget workflows
115:  bc40a560d3c <   -:  ----------- fixup! fsmonitor: introduce `core.useBuiltinFSMonitor` to call the daemon via IPC
116:  7eb8372f9fd <   -:  ----------- fixup! fsmonitor-ipc: create client routines for git-fsmonitor--daemon
117:  5ec07b8c4a8 <   -:  ----------- fixup! fsmonitor--daemon: use a cookie file to sync with file system
118:  92392daccb3 <   -:  ----------- fixup! fsmonitor--daemon: use a cookie file to sync with file system
119:  cfa0c45308e <   -:  ----------- fixup! fsmonitor--daemon: use a cookie file to sync with file system
152:  397fd90bc27 = 145:  f231d0f3752 git help: special-case `scalar`
153:  74a7916a676 ! 146:  00ad7ac24db scalar: implement the `help` subcommand
    @@ Commit message
     
      ## contrib/scalar/scalar.c ##
     @@ contrib/scalar/scalar.c: static int cmd_delete(int argc, const char **argv)
    - 	return delete_enlistment();
    + 	return res;
      }
      
     +static int cmd_help(int argc, const char **argv)
    @@ contrib/scalar/scalar.c: static int cmd_delete(int argc, const char **argv)
      static int cmd_version(int argc, const char **argv)
      {
      	int verbose = 0, build_options = 0;
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	{ "reconfigure", cmd_reconfigure },
      	{ "diagnose", cmd_diagnose },
      	{ "delete", cmd_delete },
154:  c1d1d7bf6a0 = 147:  af1535a39c3 Optionally include `scalar` when building/testing Git
155:  a138542843e = 148:  9344672b6fd NOT-TO-UPSTREAM: ci: build `scalar.exe`, too
156:  4bc32bcbfb3 ! 149:  4e0ddc9c10d ci(windows): also run `scalar` tests
    @@ Commit message
     
      ## ci/run-test-slice.sh ##
     @@ ci/run-test-slice.sh: make --quiet -C t T="$(cd t &&
    - 	./helper/test-tool path-utils slice-tests "$1" "$2" t[0-9]*.sh |
    - 	tr '\n' ' ')"
    + # Run the git subtree tests only if main tests succeeded
    + test 0 != "$1" || make -C contrib/subtree test
      
     +if test 0 = "$1" && test -n "$INCLUDE_SCALAR"
     +then
157:  e564a258b10 ! 150:  9ec6f61cac3 scalar: implement a minimal JSON parser
    @@ contrib/scalar/json-parser.c (new)
     +
     +	return it->fn(it);
     +}
    - \ No newline at end of file
     
      ## contrib/scalar/json-parser.h (new) ##
     @@
    @@ contrib/scalar/json-parser.h (new)
     +int iterate_json(struct json_iterator *it);
     +
     +#endif
    - \ No newline at end of file
158:  6791e656f67 ! 151:  413d1830b15 scalar clone: support GVFS-enabled remote repositories
    @@ contrib/scalar/scalar.c
      #include "fsmonitor-ipc.h"
     +#include "json-parser.h"
      
    - static void setup_enlistment_directory(int argc, const char **argv,
    - 				       const char * const *usagestr,
    + /*
    +  * Remove the deepest subdirectory in the provided path string. Path must not
     @@ contrib/scalar/scalar.c: static int set_config(const char *fmt, ...)
      	return res;
      }
    @@ contrib/scalar/scalar.c: static int cmd_clone(int argc, const char **argv)
     +			res = error(_("could not configure cache server"));
     +			goto cleanup;
     +		}
    ++		if (cache_server_url)
    ++			fprintf(stderr, "Cache server URL: %s\n",
    ++				cache_server_url);
     +	} else {
     +		if (set_config("core.useGVFSHelper=false") ||
     +		    set_config("remote.origin.promisor=true") ||
159:  e19274c65ff = 152:  82e799c74d6 test-gvfs-protocol: also serve smart protocol
160:  fe67f85fc33 = 153:  45cf8d71c3a gvfs-helper: add the `endpoint` command
161:  16f782d1cbd = 154:  86b3106d7f0 dir_inside_of(): handle directory separators correctly
162:  5446f83c55b ! 155:  5bff0ac9acc scalar: disable authentication in unattended mode
    @@ Metadata
      ## Commit message ##
         scalar: disable authentication in unattended mode
     
    -    Signed-off-by: Johannes Schindelin <[email protected]>
    +    Modified to remove call to is_unattended() that has not been implemented
    +    yet.
     
    - ## contrib/scalar/scalar.c ##
    -@@ contrib/scalar/scalar.c: int cmd_main(int argc, const char **argv)
    - 	struct strbuf scalar_usage = STRBUF_INIT;
    - 	int i;
    - 
    -+	if (is_unattended()) {
    -+		setenv("GIT_ASKPASS", "", 0);
    -+		setenv("GIT_TERMINAL_PROMPT", "false", 0);
    -+		git_config_push_parameter("credential.interactive=never");
    -+	}
    -+
    - 	while (argc > 1 && *argv[1] == '-') {
    - 		if (!strcmp(argv[1], "-C")) {
    - 			if (argc < 3)
    +    Signed-off-by: Johannes Schindelin <[email protected]>
     
      ## contrib/scalar/t/t9099-scalar.sh ##
     @@ contrib/scalar/t/t9099-scalar.sh: PATH=$(pwd)/..:$PATH
163:  b654d0aba62 ! 156:  5fc49773480 scalar: do initialize `gvfs.sharedCache`
    @@ Commit message
     
             ~/.cache/scalar on Linux
     
    +    Modified to include call to is_unattended() that was removed from a
    +    previous commit.
    +
         Signed-off-by: Johannes Schindelin <[email protected]>
     
      ## contrib/scalar/scalar.c ##
    @@ contrib/scalar/scalar.c
     +	return git_env_bool("Scalar_UNATTENDED", 0);
     +}
     +
    - static void setup_enlistment_directory(int argc, const char **argv,
    - 				       const char * const *usagestr,
    - 				       const struct option *options)
    + /*
    +  * Remove the deepest subdirectory in the provided path string. Path must not
    +  * include a trailing path separator. Returns 1 if parent directory found,
     @@ contrib/scalar/scalar.c: static int run_git(const char *arg, ...)
      	return res;
      }
    @@ contrib/scalar/scalar.c: static int cmd_diagnose(int argc, const char **argv)
      
      	return res;
      }
    +@@ contrib/scalar/scalar.c: int cmd_main(int argc, const char **argv)
    + 	struct strbuf scalar_usage = STRBUF_INIT;
    + 	int i;
    + 
    ++	if (is_unattended()) {
    ++		setenv("GIT_ASKPASS", "", 0);
    ++		setenv("GIT_TERMINAL_PROMPT", "false", 0);
    ++		git_config_push_parameter("credential.interactive=never");
    ++	}
    ++
    + 	while (argc > 1 && *argv[1] == '-') {
    + 		if (!strcmp(argv[1], "-C")) {
    + 			if (argc < 3)
     
      ## contrib/scalar/scalar.txt ##
     @@ contrib/scalar/scalar.txt: SYNOPSIS
164:  054bb04f8af = 157:  3c830cb4499 scalar diagnose: include shared cache info
165:  53e18006a5d = 158:  f66c73fce27 scalar: only try GVFS protocol on https:// URLs
166:  1c60e62ee95 ! 159:  47f5e3dff85 scalar: verify that we can use a GVFS-enabled repository
    @@ contrib/scalar/t/t9099-scalar.sh: test_expect_success 'scalar delete with enlist
     +	)
     +'
     +
    - test_done
    + test_expect_success '`scalar register` parallel to worktree' '
    + 	git init test-repo/src &&
    + 	mkdir -p test-repo/out &&
167:  4d7e39976ff ! 160:  18a0787623d scalar: add the `cache-server` command
    @@ contrib/scalar/scalar.c: static int cmd_version(int argc, const char **argv)
     +		usage_msg_opt(_("--get/--set/--list are mutually exclusive"),
     +			      usage, options);
     +
    -+	setup_enlistment_directory(argc, argv, usage, options);
    ++	setup_enlistment_directory(argc, argv, usage, options, NULL);
     +
     +	if (list) {
     +		const char *name = list, *url = list;
    @@ contrib/scalar/scalar.c: static int cmd_version(int argc, const char **argv)
     +				free(list);
     +				return 1;
     +			}
    -+			if (remote->url == 0) {
    ++			if (!remote->url) {
     +				free(list);
     +				return error(_("remote '%s' has no URLs"),
     +					     name);
    @@ contrib/scalar/scalar.c: static int cmd_version(int argc, const char **argv)
     +	return !!res;
     +}
     +
    - struct {
    + static struct {
      	const char *name;
      	int (*fn)(int, const char **);
    -@@ contrib/scalar/scalar.c: struct {
    +@@ contrib/scalar/scalar.c: static struct {
      	{ "delete", cmd_delete },
      	{ "help", cmd_help },
      	{ "version", cmd_version },
168:  80f416d2e0c = 161:  5c5eccdb116 scalar: add a test toggle to skip accessing the vsts/info endpoint
121:  e5266eb2d5b = 162:  dda96527c1f ci: run Scalar functional tests for PRs against features/*
127:  2babaf15a09 <   -:  ----------- scalar unregister: handle deleted enlistment directory gracefully
169:  5f9e8da9241 <   -:  ----------- fixup! ci: run Scalar's Functional Tests
170:  244d94a554b <   -:  ----------- fixup! ci: run Scalar's Functional Tests
171:  00377dc74f8 <   -:  ----------- fixup! ci: run Scalar's Functional Tests
172:  cd4928c6324 <   -:  ----------- fixup! Trace2:gvfs:experiment: capture more 'tracking' details
173:  73ed7c848f1 <   -:  ----------- fixup! gvfs-helper: add prefetch support
174:  7acb919a607 = 163:  b89d3805035 scalar: use microsoft/scalar:main for tests
175:  3ecd5c8e785 <   -:  ----------- fixup! Adding readme for microsoft/git
176:  1e511a47e1a ! 164:  ab42168d94a scalar: add docs from microsoft/scalar
    @@ contrib/scalar/docs/faq.md (new)
     +
     +Your build system may create build artifacts such as `.obj` or `.lib` files
     +next to your source code. These are commonly "hidden" from Git using
    -+`.gitignore` files. Having such artifacts into your source tree creates
    ++`.gitignore` files. Having such artifacts in your source tree creates
     +additional work for Git because it needs to look at these files and match them
     +against the `.gitignore` patterns.
     +
    -+By following the pattern Scalar tries to establish and placing your build
    ++By following the `src` pattern Scalar tries to establish and placing your build
     +intermediates and outputs parallel with the `src` folder and not inside it,
     +you can help optimize Git command performance for developers in the repository
     +by limiting the number of files Git needs to consider for many common
    @@ contrib/scalar/docs/getting-started.md (new)
     +---------------------------------------------------
     +
     +The `clone` verb creates a local enlistment of a remote repository using the
    -+[GVFS protocol](https://github.com/microsoft/VFSForGit/blob/HEAD/Protocol.md).
    ++[GVFS protocol](https://github.com/microsoft/VFSForGit/blob/HEAD/Protocol.md),
    ++such as Azure Repos.
     +
     +```
     +scalar clone [options] <url> [<dir>]
    @@ contrib/scalar/docs/getting-started.md (new)
     +
     +### Sparse Repo Mode
     +
    -+By default, Scalar reduces your working directory to the only the files at the
    ++By default, Scalar reduces your working directory to only the files at the
     +root of the repository. You need to add the folders you care about to build up
     +to your working set.
     +
    @@ contrib/scalar/docs/getting-started.md (new)
     +  tree. No folders are populated.
     +* Set the directory list for your sparse-checkout using:
     +	1. `git sparse-checkout set <dir1> <dir2> ...`
    -+	2. `git sparse-checkout set --stdin <dir-list.txt`
    ++	2. `git sparse-checkout set --stdin < dir-list.txt`
     +* Run git commands as you normally would.
     +* To fully populate your working directory, run `git sparse-checkout disable`.
     +
    @@ contrib/scalar/docs/index.md (new)
     +* *Partial clone:* reduces time to get a working repository by not
     +  downloading all Git objects right away.
     +
    ++* *Background prefetch:* downloads Git object data from all remotes every
    ++  hour, reducing the amount of time for foreground `git fetch` calls.
    ++
     +* *Sparse-checkout:* limits the size of your working directory.
     +
     +* *File system monitor:* tracks the recently modified files and eliminates
    @@ contrib/scalar/docs/index.md (new)
     +
     +By running `scalar register` in any Git repo, Scalar will automatically enable
     +these features for that repo (except partial clone) and start running suggested
    -+maintenance in the background.
    ++maintenance in the background using
    ++[the `git maintenance` feature](https://git-scm.com/docs/git-maintenance).
     +
     +Repos cloned with the `scalar clone` command use partial clone or the
     +[GVFS protocol](https://github.com/microsoft/VFSForGit/blob/HEAD/Protocol.md)
    @@ contrib/scalar/docs/philosophy.md (new)
     +  remotes.
     +* Advanced data structures, such as the `commit-graph` and `multi-pack-index`
     +  are updated automatically in the background.
    -+* If Watchman is installed, then the FileSystem Monitor hook is configured
    -+  to use Watchman's change-tracking, providing faster commands such as
    -+  `git status` or `git add`.
    ++* If using macOS or Windows, then Scalar configures Git's builtin File System
    ++  Monitor, providing faster commands such as `git status` or `git add`.
     +
     +Additionally, if you use `scalar clone` to create a new repository, then
     +you will automatically get these benefits:
177:  a0cd01d5459 <   -:  ----------- fixup! Adding readme for microsoft/git
178:  66785729e8d <   -:  ----------- fixup! Adding readme for microsoft/git
179:  495f0e59cee <   -:  ----------- fixup! Adding readme for microsoft/git
180:  7020326c3b5 <   -:  ----------- fixup! Adding readme for microsoft/git
181:  b558bbd115b <   -:  ----------- fixup! Adding readme for microsoft/git
182:  516b1d46b92 <   -:  ----------- fixup! scalar: add docs from microsoft/scalar
183:  5c6e1468851 <   -:  ----------- fixup! Adding readme for microsoft/git
184:  ebebe50bbc3 <   -:  ----------- fixup! scalar: add docs from microsoft/scalar
185:  3658a536fd6 = 165:  2f6565c3883 scalar: add retry logic to run_git()
186:  cbad21ea2da = 166:  521a7f3a4ef scalar: only retry a full clone if not using GVFS protocol
187:  e7d891079ed <   -:  ----------- fixup! Adding readme for microsoft/git
188:  224b21144d6 ! 167:  8ea0db86758 gvfs: disable the built-in FSMonitor
    @@ Commit message
     
         Signed-off-by: Johannes Schindelin <[email protected]>
     
    - ## repo-settings.c ##
    -@@ repo-settings.c: void prepare_repo_settings(struct repository *r)
    - 		r->settings.core_multi_pack_index = value;
    - 	UPDATE_DEFAULT_BOOL(r->settings.core_multi_pack_index, 1);
    + ## fsmonitor-ipc.c ##
    +@@
    + #include "cache.h"
    ++#include "config.h"
    + #include "fsmonitor.h"
    + #include "simple-ipc.h"
    + #include "fsmonitor-ipc.h"
    +@@
      
    --	if (!repo_config_get_bool(r, "core.usebuiltinfsmonitor", &value) && value)
    -+	if (!git_config_get_virtualfilesystem() &&
    -+	    !repo_config_get_bool(r, "core.usebuiltinfsmonitor", &value) && value)
    - 		r->settings.use_builtin_fsmonitor = 1;
    + int fsmonitor_ipc__is_supported(void)
    + {
    ++	if (git_config_get_virtualfilesystem())
    ++		return 0;
    + 	return 1;
    + }
    + 
    +
    + ## fsmonitor-settings.c ##
    +@@ fsmonitor-settings.c: static void lookup_fsmonitor_settings(struct repository *r)
    + 
    + enum fsmonitor_mode fsm_settings__get_mode(struct repository *r)
    + {
    ++	if (git_config_get_virtualfilesystem())
    ++		return FSMONITOR_MODE_INCOMPATIBLE;
    ++
    + 	if (!r->settings.fsmonitor)
    + 		lookup_fsmonitor_settings(r);
      
    - 	if (!repo_config_get_bool(r, "feature.manyfiles", &value) && value) {
     
      ## t/t1093-virtualfilesystem.sh ##
     @@ t/t1093-virtualfilesystem.sh: test_expect_success 'folder with same prefix as file' '
    @@ t/t1093-virtualfilesystem.sh: test_expect_success 'folder with same prefix as fi
     +	write_script .git/hooks/virtualfilesystem <<-\EOF &&
     +		printf "dir1/\0"
     +	EOF
    ++	git config core.virtualfilesystem .git/hooks/virtualfilesystem &&
     +	git status &&
     +	test_must_fail git fsmonitor--daemon status
     +'
189:  29351a8f214 ! 168:  88c6b400a76 Update winget manifest
    @@ .github/workflows/release-winget.yml: jobs:
     +          Publisher: The Git Client Team at GitHub
                Moniker: microsoft-git
                PackageUrl: https://aka.ms/ms-git
    -           Tags: [ microsoft-git ]
    +-          Tags: [ microsoft-git ]
     -          License: Copyright (C) Microsoft Corporation
    ++          Tags:
    ++          - microsoft-git
     +          License: GPLv2
                ShortDescription: |
                  Git distribution to support monorepo scenarios.
    @@ .github/workflows/release-winget.yml: jobs:
                PackageLocale: en-US
                ManifestType: singleton
                ManifestVersion: 1.0.0
    --        alwaysUsePullRequest: true
    -+      alwaysUsePullRequest: true
190:  88119b020d6 <   -:  ----------- Fix alwaysUsePullRequest indentation
191:  84c35ff2824 <   -:  ----------- fixup! fsmonitor: introduce `core.useBuiltinFSMonitor` to call the daemon via IPC
192:  dd05572f76d <   -:  ----------- fixup! winget tag specification
193:  80ab57eb490 <   -:  ----------- fixup! fixup! winget tag specification
194:  b724211d25e = 169:  9eebae7a418 Add workflow for apt-get release
196:  979db78b38c ! 170:  7310da5e7e3 Add instructions for `apt-get` install to `README`
    @@ README.md: Or you can run the `git update-microsoft-git` command, which will run
     -For Ubuntu/Debian distributions, `apt-get` support is coming soon. For now, please use the most
     -recent [`.deb` package](https://github.com/microsoft/git/releases). For example, you can download a
     -specific version as follows:
    -+`apt-get` support is available for Ubuntu Bionic Beaver (18.04) and Hirsute 
    ++`apt-get` support is available for Ubuntu Bionic Beaver (18.04) and Hirsute
     +Hippo (21.04). Take the following steps to set up and install based on the
     +version you are running:
     +
    @@ README.md: Or you can run the `git update-microsoft-git` command, which will run
     +
     +### Other Ubuntu/Debian distributions
     +
    -+Please use the most recent 
    -+[`.deb` package](https://github.com/microsoft/git/releases). For example, 
    ++Please use the most recent
    ++[`.deb` package](https://github.com/microsoft/git/releases). For example,
     +you can download a specific version as follows:
      
      ```shell
195:  a5789d2fc4c = 171:  c2a298edfab add/rm: allow adding sparse entries when virtual
197:  c017a5416ea = 172:  2ee85fcc1b9 Clarify `workflow_dispatch` input description
198:  ed0b26a6581 <   -:  ----------- fixup! scalar clone: support GVFS-enabled remote repositories
199:  6ecf53a2b01 <   -:  ----------- fixup! Add instructions for `apt-get` install to `README`
200:  091c097e310 ! 173:  55818b1869a release: create initial Windows installer build workflow
    @@ .github/workflows/build-git-installers.yml (new)
     +  windows_artifacts:
     +    runs-on: windows-latest
     +    needs: [prereqs, windows_pkg]
    ++    env:
    ++      HOME: "${{github.workspace}}\\home"
     +    strategy:
     +      matrix:
     +        artifact:
    @@ .github/workflows/build-git-installers.yml (new)
     +        run: |
     +          set -x
     +
    -+          eval /usr/src/build-extra/please.sh make_installers_from_mingw_w64_git --version=${{ needs.prereqs.outputs.tag_version }} -o artifacts --${{matrix.artifact.name}} --pkg=pkg-x86_64/mingw-w64-x86_64-git-[0-9]*.tar.xz --pkg=pkg-x86_64/mingw-w64-x86_64-git-doc-html-[0-9]*.tar.xz &&
    ++          # Copy the PDB archive to the directory where `--include-pdbs` expects it
    ++          b=/usr/src/build-extra &&
    ++          mkdir -p $b/cached-source-packages &&
    ++          cp pkg-x86_64/*-pdb* $b/cached-source-packages/ &&
    ++
    ++          # Build the installer, embedding PDBs
    ++          eval $b/please.sh make_installers_from_mingw_w64_git --include-pdbs \
    ++              --version=${{ needs.prereqs.outputs.tag_version }} \
    ++              -o artifacts --${{matrix.artifact.name}} \
    ++              --pkg=pkg-x86_64/mingw-w64-x86_64-git-[0-9]*.tar.xz \
    ++              --pkg=pkg-x86_64/mingw-w64-x86_64-git-doc-html-[0-9]*.tar.xz &&
    ++
     +          if test portable = '${{matrix.artifact.name}}' && test -n "$(git config alias.signtool)"
     +          then
     +            git signtool artifacts/PortableGit-*.exe
     +          fi &&
     +          openssl dgst -sha256 artifacts/${{matrix.artifact.fileprefix}}-*.exe | sed "s/.* //" >artifacts/sha-256.txt
    -+      - name: Copy package-versions and pdbs
    -+        if: matrix.artifact.name == 'installer'
    -+        shell: bash
    -+        run: |
    -+          cp /usr/src/build-extra/installer/package-versions.txt artifacts/ &&
    -+
    -+          a=$PWD/artifacts &&
    -+          p=$PWD/pkg-x86_64 &&
    -+          (cd /usr/src/build-extra &&
    -+          mkdir -p cached-source-packages &&
    -+          cp "$p"/*-pdb* cached-source-packages/ &&
    -+          GIT_CONFIG_PARAMETERS="'windows.sdk64.path='" ./please.sh bundle_pdbs --arch=x86_64 --directory="$a" installer/package-versions.txt)
     +      - name: Publish ${{matrix.artifact.name}}-x86_64
     +        uses: actions/upload-artifact@v2
     +        with:
201:  9ecd68a2791 = 174:  0acbf8a1575 release: add Mac OSX installer build
202:  e18a7276c1a = 175:  6e865b53688 release: build unsigned Ubuntu .deb package
203:  e76aa357b60 = 176:  547bea3b2bd release: add signing step for .deb package
204:  2147b22658a = 177:  d76fbf56ffd release: create draft GitHub release with packages & installers
205:  1ba931c2d58 <   -:  ----------- fixup! release: create initial Windows installer build workflow
206:  449963172c2 <   -:  ----------- fixup! release: create initial Windows installer build workflow
  -:  ----------- > 178:  61a4750fcc2 Makefile: disable cURL warnings on gvfs-helper.c
207:  4a1a19d637b = 179:  2dddf82e03a git-rebase.txt: correct antiquated claims about --rebase-merges
208:  65b55e0cf20 = 180:  eb3b25edda7 directory-rename-detection.txt: small updates due to merge-ort optimizations
209:  1b3e9dc3c16 = 181:  aabae1c3ba8 Documentation: edit awkward references to `git merge-recursive`
210:  f397aa8aefe = 182:  6d75e85d666 merge-strategies.txt: update wording for the resolve strategy
211:  17b458ae7b5 = 183:  d4d57bacf8d merge-strategies.txt: do not imply using copy detection is desired
212:  692182b6673 = 184:  c0803c00b80 merge-strategies.txt: avoid giving special preference to patience algorithm
213:  893a59f9db7 = 185:  b20ebe9e2ed merge-strategies.txt: fix simple capitalization error
214:  4f3bd4c256a = 186:  656e23c4858 git-rebase.txt: correct out-of-date and misleading text about renames
215:  c267e29a870 = 187:  ef8fa513c6f merge-strategies.txt: add coverage of the `ort` merge strategy
216:  e33831bc760 = 188:  dab1c0d983a Update error message and code comment
217:  289882c6cc5 = 189:  9cbb481e32d Change default merge backend from recursive to ort
218:  8c23f8871b6 = 190:  68d56105c83 Update docs for change of default merge backend
219:  72061b80a65 <   -:  ----------- fixup! scalar register: set recommended config settings
220:  20597014cb6 <   -:  ----------- fixup! scalar register/unregister: start/stop maintenance on repository
222:  f05be53069b <   -:  ----------- fixup! scalar: implement the `run` command
223:  f6ca26f1add <   -:  ----------- fixup! scalar: allow reconfiguring an existing enlistment
224:  b59e481058a <   -:  ----------- fixup! Implement `scalar diagnose`
225:  ea3e35852e2 <   -:  ----------- fixup! scalar: implement the `delete` command
226:  1976a6ef208 <   -:  ----------- fixup! scalar: add the `cache-server` command
  -:  ----------- > 191:  322ee20a292 sparse-index: fix crash in status
  -:  ----------- > 192:  815f10a867d merge-ort: ignore skip-worktree bit with virtual filesystem

@derrickstolee derrickstolee self-assigned this Aug 3, 2021
@derrickstolee derrickstolee force-pushed the tentative/vfs-2.33.0 branch 6 times, most recently from 329bb48 to ba702dc on August 3, 2021 20:26
@dscho
Copy link
Member

dscho commented Aug 3, 2021

Looks good!

The skip_hash business looks like it might have been a bit of a nightmare to figure out, so glad that you did!

And sorry about the README.md changes, they probably caused some merge conflicts with all those fixup!s...

At some stage, we should squash all the gvfs.h/gvfs.c changes into a single commit, I guess...

@derrickstolee derrickstolee force-pushed the tentative/vfs-2.33.0 branch 4 times, most recently from 4dce453 to ee0447c on August 9, 2021 18:21
Kevin Willford and others added 16 commits August 17, 2021 06:45
While using the reset --stdin feature on Windows, a path that was added
may have a \r at the end that wasn't getting removed, so it didn't match
the path in the index and wasn't reset.

Signed-off-by: Kevin Willford <[email protected]>
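
A minimal sketch of the trimming this describes, in plain C; the helper name is hypothetical and the real change lives in the `reset --stdin` path handling:

```
#include <string.h>

/* Strip a trailing carriage return so the path matches the index entry. */
static void trim_trailing_cr(char *path)
{
	size_t len = strlen(path);

	if (len && path[len - 1] == '\r')
		path[len - 1] = '\0';
}
```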
Teach STATUS to optionally serialize the results of a
status computation to a file.

Teach STATUS to optionally read an existing serialization
file and simply print the results, rather than actually
scanning.

This is intended for immediate status results on extremely
large repos and assumes the use of a service/daemon to
maintain a fresh current status snapshot.

Signed-off-by: Jeff Hostetler <[email protected]>
Teach status serialization to take an optional pathname on
the command line to direct that cache data be written there
rather than to stdout.  When used this way, normal status
results will still be written to stdout.

When no path is given, only binary serialization data is
written to stdout.

Usage:
    git status --serialize[=<path>]

Signed-off-by: Jeff Hostetler <[email protected]>
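
A rough sketch of the destination selection described above, using plain stdio rather than the builtin's actual option handling:

```
#include <stdio.h>

/*
 * With no path, the binary serialization data replaces the normal output
 * on stdout; with a path, the cache goes to that file and the normal
 * status results are still printed to stdout.
 */
static FILE *open_serialization_target(const char *path)
{
	if (!path)
		return stdout;
	return fopen(path, "wb");
}
```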
This adds hard-coded call to GVFS.hooks.exe before and after each Git
command runs.

To make sure that this is only called on repositories cloned with GVFS, we
test for the tell-tale .gvfs.

Signed-off-by: Ben Peart <[email protected]>
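
The shape of that check, as a hedged sketch; the function name and the use of system() are illustrative only, since the real code goes through Git's run-command API:

```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Only invoke the external hook for repositories cloned with GVFS. */
static void maybe_run_gvfs_hook(const char *phase, const char *git_cmd)
{
	char cmd[1024];

	if (access(".gvfs", F_OK))
		return; /* no tell-tale .gvfs: not a GVFS repository */

	snprintf(cmd, sizeof(cmd), "GVFS.hooks.exe %s %s", phase, git_cmd);
	system(cmd); /* illustrative; real hook invocation handles errors */
}
```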
Teach status deserialize code to reject status cache
when printing in porcelain V2 and there are unresolved
conflicts in the cache file.  A follow-on task might
extend the cache format to include this additional data.

See code for longer explanation.

Signed-off-by: Jeff Hostetler <[email protected]>
Add trace2 region around read_object_process to collect
time spent waiting for missing objects to be dynamically
fetched.

Signed-off-by: Jeff Hostetler <[email protected]>
Since we really want to be based on a `.vfs.*` tag, let's make sure that
there was a new-enough one, i.e. one that agrees with the first three
version numbers of the recorded default version.

This prevents e.g. v2.22.0.vfs.0.<some-huge-number>.<commit> from being
used when the current release train was not yet tagged.

It is important to get the first three numbers of the version right
because e.g. Scalar makes decisions depending on those (such as assuming
that the `git maintenance` built-in is not available, even though it
actually _is_ available).

Signed-off-by: Johannes Schindelin <[email protected]>
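
The "first three version numbers must agree" rule, illustrated in C for clarity; the actual check is shell code in GIT-VERSION-GEN:

```
#include <stdio.h>

/* Return 1 when two version strings agree on major.minor.patch. */
static int same_first_three(const char *a, const char *b)
{
	int a1, a2, a3, b1, b2, b3;

	if (sscanf(a, "v%d.%d.%d", &a1, &a2, &a3) != 3 ||
	    sscanf(b, "v%d.%d.%d", &b1, &b2, &b3) != 3)
		return 0;
	return a1 == b1 && a2 == b2 && a3 == b3;
}
```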
Suggested by Ben Peart.

Signed-off-by: Johannes Schindelin <[email protected]>
Fix "git status --deserialize" to correctly report both pathnames
for renames.  Add a test case to verify.

A change was made upstream that added an additional "rename_status"
field to the "struct wt_status_change_data" structure.  It is used
during the various print routines to decide if 2 pathnames need to
be printed.

    5134ccd
    wt-status.c: rename rename-related fields in wt_status_change_data

The fix here is to add that field to the status cache data.

Signed-off-by: Jeff Hostetler <[email protected]>
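
An illustrative check of when the second pathname is needed; this is not the actual print routine, and the letters follow Git's diff status codes:

```
/* A rename ('R') or copy ('C') carries both a source and a destination path. */
static int needs_two_pathnames(int rename_status)
{
	return rename_status == 'R' || rename_status == 'C';
}
```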
Add trace2 region and data events describing attempts to deserialize
status data using a status cache.

A category:status, label:deserialize region is pushed around the
deserialize code.

Deserialization results when reading from a file are:
    category:status, path   = <path>
    category:status, polled = <number_of_attempts>
    category:status, result = "ok" | "reject"

When reading from STDIN are:
    category:status, path   = "STDIN"
    category:status, result = "ok" | "reject"

Status will fall back and run a normal status scan when a "reject"
is reported (unless "--deserialize-wait=fail").

If "ok" is reported, status was able to use the status cache and
avoid scanning the workdir.

Additionally, a cmd_mode is emitted for each step: collection,
deserialization, and serialization.  For example, if deserialization
is attempted and fails and status falls back to actually computing
the status, a cmd_mode message containing "deserialize" is issued
and then a cmd_mode for "collect" is issued.

Also, if deserialization fails, a data message containing the
rejection reason is emitted.

Signed-off-by: Jeff Hostetler <[email protected]>
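
A hedged sketch of emitting those events around one deserialization attempt; `wt_status_deserialize_from()` is a hypothetical reader, and the category/key strings mirror the list above:

```
#include "cache.h"  /* assumes Git's in-tree headers */
#include "trace2.h"

static int try_deserialize(const char *path)
{
	int ok;

	trace2_region_enter("status", "deserialize", the_repository);
	ok = wt_status_deserialize_from(path); /* hypothetical reader */
	trace2_data_string("status", the_repository, "path",
			   path ? path : "STDIN");
	trace2_data_string("status", the_repository, "result",
			   ok ? "ok" : "reject");
	trace2_region_leave("status", "deserialize", the_repository);

	return ok;
}
```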
This header file will accumulate GVFS-specific definitions.

Signed-off-by: Kevin Willford <[email protected]>
Changes to the global or repo-local excludes files can change the
results returned by "git status" for untracked files.  Therefore,
it is important that the exclude-file values used during serialization
are still current at the time of deserialization.

Teach "git status --serialize" to report metadata on the user's global
exclude file (which defaults to "$XDG_HOME/git/ignore") and for the
repo-local excludes file (which is in ".git/info/excludes").  Serialize
will record the pathnames and mtimes for these files in the serialization
header (next to the mtime data for the .git/index file).

Teach "git status --deserialize" to validate this new metadata.  If either
exclude file has changed since the serialization-cache-file was written,
then deserialize will reject the cache file and force a full/normal status
run.

Signed-off-by: Jeff Hostetler <[email protected]>
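
A minimal sketch of the mtime bookkeeping this relies on, using plain POSIX stat(); struct and field names are illustrative and not the actual serialization format:

```
#include <string.h>
#include <sys/stat.h>
#include <time.h>

struct excludes_meta {
	char path[1024];
	time_t mtime;
};

/* Record path + mtime when writing the cache... */
static int capture_excludes_meta(const char *path, struct excludes_meta *out)
{
	struct stat st;

	if (stat(path, &st))
		return -1;
	strncpy(out->path, path, sizeof(out->path) - 1);
	out->path[sizeof(out->path) - 1] = '\0';
	out->mtime = st.st_mtime;
	return 0;
}

/* ...and reject the cache later if the file changed on disk. */
static int excludes_unchanged(const struct excludes_meta *meta)
{
	struct stat st;

	return !stat(meta->path, &st) && st.st_mtime == meta->mtime;
}
```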
Add trace information around status serialization.

Signed-off-by: Jeff Hostetler <[email protected]>
vdye and others added 21 commits August 17, 2021 06:46
- include `scalar`
- build *unsigned* .dmg & .pkg for target OS version 10.6
- upload artifacts to workflow
This adds support for releasing to Ubuntu repositories hosted
at http://packages.microsoft.com/ (hosting location for Microsoft's
official apt/yum repos). This allows users to install via apt-get on
Hirsute/Bionic. Details to configure appropriate repos can be found
here:

https://docs.microsoft.com/en-us/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software).
Commit 58634db ("rebase: Allow merge strategies to be used when
rebasing", 2006-06-21) added the --merge option to git-rebase so that
renames could be detected (at least when using the `recursive` merge
backend).  However, git-am -3 gained that same ability in commit
579c9bb ("Use merge-recursive in git-am -3.", 2006-12-28).  As such,
the comment about being able to detect renames is not particularly
noteworthy.  Remove it.  While tweaking this description, add a quick
comment about when --merge became the default.

Signed-off-by: Elijah Newren <[email protected]>
- include `scalar`
- build & upload unsigned .deb package
Updating `README.md` with instructions for `apt-get` setup and install
for Ubuntu Bionic + Hirsute.
Upstream, a20f704 (add: warn when asked to update SKIP_WORKTREE entries,
2021-04-08) modified how 'git add <pathspec>' works with cache entries
marked with the SKIP_WORKTREE bit. The intention is to prevent a user
from accidentally adding a path that is outside their sparse-checkout
definition but somehow matches an existing index entry.

A similar change for 'git rm' happened in d5f4b82 (rm: honor sparse
checkout patterns, 2021-04-08).

This breaks when using the virtual filesystem in VFS for Git. It is
rare, but we could be in a scenario where the user has staged a change
and then the file is projected away. If the user re-adds the file, then
this warning causes the command to fail with the advise message.

Disable this logic when core_virtualfilesystem is enabled.

Signed-off-by: Derrick Stolee <[email protected]>
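
The guard described above, as a hedged sketch; `core_virtualfilesystem` and `ce_skip_worktree()` follow the commit message, but this is not the literal patch and assumes Git's in-tree headers:

```
#include "cache.h"  /* struct cache_entry, ce_skip_worktree() */

/* Skip the SKIP_WORKTREE warning entirely when VFS for Git projects files. */
static int should_warn_about_skip_worktree(const struct cache_entry *ce)
{
	if (core_virtualfilesystem)
		return 0;
	return ce_skip_worktree(ce);
}
```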
- sign using Azure-stored certificates & client
- sign on Windows agent via python script
- job skipped if credentials for accessing certificate aren't present
We are currently using 'Release tag' to describe the required input
to our `workflow_dispatch` trigger. This is inaccurate - this field
actually requires a 'Release id', which I discovered when testing
GCM Core `apt-get` deployments yesterday. Updating so that the
description doesn't confuse folks running the workflow for a release
that is not 'latest'.
Upstream, a20f704 (add: warn when asked to update SKIP_WORKTREE entries,
04-08-2021) modified how 'git add <pathspec>' works with cache entries
marked with the SKIP_WORKTREE bit. The intention is to prevent a user
from accidentally adding a path that is outside their sparse-checkout
definition but somehow matches an existing index entry.

This breaks when using the virtual filesystem in VFS for Git. It is
rare, but we could be in a scenario where the user has staged a change
and then the file is projected away. If the user re-adds the file, then
this warning causes the command to fail with the advise message.

Disable this logic when core_virtualfilesystem is enabled.

This should allow the VFS for Git functional tests to pass (at least
the ones in the default run). I'll create a `-pr` installer build to
check before merging this.
There were two locations in the code that referred to 'merge-recursive'
but which were also applicable to 'merge-ort'.  Update them to more
general wording.

Signed-off-by: Elijah Newren <[email protected]>
- create release & uploads artifact using Octokit
- use job "if" condition to handle uploading signed *or* unsigned .deb
Add instructions for `apt-get` install to `README`
There are a few reasons to switch the default:
  * Correctness
  * Extensibility
  * Performance

I'll provide some summaries about each.

=== Correctness ===

The original impetus for a new merge backend was to fix issues that were
difficult to fix within recursive's design.  The success with this goal
is perhaps most easily demonstrated by running the following:

  $ git grep -2 KNOWN_FAILURE t/ | grep -A 4 GIT_TEST_MERGE_ALGORITHM
  $ git grep test_expect_merge_algorithm.failure.success t/
  $ git grep test_expect_merge_algorithm.success.failure t/

In order, these greps show:

  * Seven sets of submodule tests (10 total tests) that fail with
    recursive but succeed with ort
  * 22 other tests that fail with recursive, but succeed with ort
  * 0 tests that pass with recursive, but fail with ort

=== Extensibility ===

Being able to perform merges without touching the working tree or index
makes it possible to create new features that were difficult with the
old backend:

  * Merging, cherry-picking, rebasing, reverting in bare repositories...
    or just on branches that aren't checked out.

  * `git diff AUTO_MERGE` -- ability to see what changes the user has
    made to resolve conflicts so far (see commit 5291828 ("merge-ort:
    write $GIT_DIR/AUTO_MERGE whenever we hit a conflict", 2021-03-20)

  * A --remerge-diff option for log/show, used to show diffs for merges
    that display the difference between what an automatic merge would
    have created and what was recorded in the merge.  (This option will
    often result in an empty diff because many merges are clean, but for
    the non-clean ones it will show how conflicts were fixed including
    the removal of conflict markers, and also show additional changes
    made outside of conflict regions to e.g. fix semantic conflicts.)

  * A --remerge-diff-only option for log/show, similar to --remerge-diff
    but also showing how cherry-picks or reverts differed from what an
    automatic cherry-pick or revert would provide.

The last three have been implemented already (though only one has been
submitted upstream so far; the others were waiting for performance work
to complete), and I still plan to implement the first one.

=== Performance ===

I'll quote from the summary of my final optimization for merge-ort
(while fixing the testcase name from 'no-renames' to 'few-renames'):

                               Timings

                                          Infinite
                 merge-       merge-     Parallelism
                recursive    recursive    of rename    merge-ort
                 v2.30.0      current     detection     current
                ----------   ---------   -----------   ---------
few-renames:      18.912 s    18.030 s     11.699 s     198.3 ms
mega-renames:   5964.031 s   361.281 s    203.886 s     661.8 ms
just-one-mega:   149.583 s    11.009 s      7.553 s     264.6 ms

                           Speedup factors

                                          Infinite
                 merge-       merge-     Parallelism
                recursive    recursive    of rename
                 v2.30.0      current     detection    merge-ort
                ----------   ---------   -----------   ---------
few-renames:        1           1.05         1.6           95
mega-renames:       1          16.5         29           9012
just-one-mega:      1          13.6         20            565

And, for partial clone users:

             Factor reduction in number of objects needed

                                          Infinite
                 merge-       merge-     Parallelism
                recursive    recursive    of rename
                 v2.30.0      current     detection    merge-ort
                ----------   ---------   -----------   ---------
mega-renames:       1            1            1          181.3

Signed-off-by: Elijah Newren <[email protected]>
Implement workflow to create GitHub release with attached `git` installers
Make it clear that `ort` is the default merge strategy now rather than
`recursive`, including moving `ort` to the front of the list of merge
strategies.

Signed-off-by: Elijah Newren <[email protected]>
The 'ort' strategy is a new algorithm that replaces the 'recursive' merge strategy. I've been reviewing some of the performance patches upstream, many of which are already in Git 2.32.0 (more are coming in 2.33.0), and even with the ones already included it is a clear performance win for our large repos.

I tested on the Office monorepo and consistently saw merge times in the 5-6 second range. With the 'recursive' strategy, these would range from 7-20 seconds. My tests reproduced merges found within the commit history, and the ones that succeeded without conflicts matched the committed changes. There were even a few where the 'recursive' strategy did not resolve to the committed change, but the 'ort' version did (probably because of better rename detection).

Not only is this a beneficial performance change for our users across `microsoft/git`, it will be a critical step to allowing `git merge` to work quickly with sparse index. In my testing of a prototype, I was able to get `git merge` commands with sparse index and the 'ort' strategy down to 0.5-1.5 seconds in most cases. (Cases with a merge conflict outside of the sparse-checkout definition jumped back up to the 6-7 second range, which is expected, and should be rare.)

cc: @newren for awareness. Thanks for the patches! These were applied from those sent to the list via git#1055.
Copy the `index_state->dir_hash` back to the real istate
after expanding a sparse index.

A crash was observed in `git status` during some hashmap lookups with
corrupted hashmap entries.  During an index expansion, new cache-entries
are added to the `index_state->name_hash` and the `dir_hash` in a
temporary `index_state` variable `full`.  However, only the `name_hash`
hashmap from this temp variable was copied back into the real `istate`
variable.  The original copy of the `dir_hash` was incorrectly preserved.
If the table in the `full->dir_hash` hashmap were realloced, the
stale version (in `istate`) would be corrupted.

Signed-off-by: Jeff Hostetler <[email protected]>
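
A sketch of the shape of the fix, assuming Git's `struct index_state` with `name_hash`/`dir_hash` members as described above; this is not the verbatim patch:

```
#include "cache.h"  /* struct index_state */

/* Adopt both hashmaps from the expanded (temporary) index state. */
static void adopt_expanded_hashmaps(struct index_state *istate,
				    struct index_state *full)
{
	istate->name_hash = full->name_hash;
	istate->dir_hash = full->dir_hash; /* the copy that was missing */
}
```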
Without this change, merge conflicts alone start creating <path>~cruft
files on disk, which is caught by the VFS for Git functional tests.

Signed-off-by: Derrick Stolee <[email protected]>
@derrickstolee
Copy link
Author

Just wanted to point out the late addition of 24f9d5c, which is necessary for the ORT strategy to work with VFS for Git. I'm going to update our internal documentation about updating things for VFS for Git, and watching out for ce_skip_worktree() checks is one of those things I'll mention.

derrickstolee added a commit to microsoft/VFSForGit that referenced this pull request Aug 17, 2021
Copy link

@vdye vdye left a comment

The range-diff differences all line up with what's in the description, really impressive job handling all of this! My only question is, where did 322ee20 come from (you specifically noted 24f9d5c was added - wasn't sure if 322ee20 was a similar situation)?

@derrickstolee
Copy link
Author

derrickstolee commented Aug 17, 2021

The range-diff differences all line up with what's in the description, really impressive job handling all of this! My only question is, where did 322ee20 come from (you specifically noted 24f9d5c was added - wasn't sure if 322ee20 was a similar situation)?

You are absolutely right to ask.

  • 322ee20 was something that we realized was important after the sparse-index experimental release. It is a cherry-pick from sparse-index: fix crash in status #395, which was merged into features/sparse-index. Since it was only in the feature branch, it wasn't included directly in the rebase. It also didn't make it to the upstream release, but I've now submitted it as sparse-index: copy dir_hash in ensure_full_index() gitgitgadget/git#1017. Edit: it is also critical to release in the full version here because users of the experimental version will upgrade to v2.33.0.vfs.0.0 and could hit the crash again.

  • 24f9d5c was due to the VFS for Git functional tests failing with the ORT strategy. I thought I had tested the VFS for Git functional tests with that ORT change, but apparently I had not. Better late than never!

@derrickstolee derrickstolee merged commit 24f9d5c into vfs-2.33.0 Aug 17, 2021
@derrickstolee derrickstolee deleted the tentative/vfs-2.33.0 branch August 17, 2021 19:15