gitup

This is pretty short-sighted. We're already at the point where IPv4 cannot support the "whole internet" any more: ISPs provide IPv4 to customers via CGNAT or DS-Lite (sharing a single address among many of them), and there are quite a few hosts out there reachable only via IPv6 (most notably the "beefy" machines building official FreeBSD packages). This doesn't come as a surprise; we all knew 32-bit addressing wouldn't hold …
All this is chatter and manipulation of public opinion.
For some 8-15 years now, marketers have been saying that IPv4 is about to end, but IPv4 is not over yet!
And most of the internet is still on IPv4.
We will use IPv4 for at least another 10 years!
 
LOL. Yes, and still you're not talking about the same thing. I didn't say IPv4 will go away any time soon. It won't, because resistance from people who just never want to change anything is extremely powerful.

The thing is: The fraction of the internet reachable via IPv4 will decrease further. It's already well below 100%. This is something you can't change, because there is just no other way.
 
Created a ticket.

# host git.freebsd.org
git.freebsd.org is an alias for gitmir.geo.freebsd.org.
gitmir.geo.freebsd.org has address 139.178.72.204
gitmir.geo.freebsd.org has IPv6 address 2604:1380:2000:9501::e6a:1
gitmir.geo.freebsd.org mail is handled by 0 .

# telnet 139.178.72.204 443
Trying 139.178.72.204...
Connected to gitmir.pkt.freebsd.org.
Escape character is '^]'.

q
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0
Date: Thu, 06 May 2021 07:14:28 GMT
Content-Type: text/html
Content-Length: 157
Connection: close

<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.18.0</center>
</body>
</html>
Connection closed by foreign host.
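
For what it's worth, the transcript above shows that the IPv4 TCP connection itself succeeds: nginx answers, albeit with a 400 for the non-HTTP input. A standalone IPv4-only reachability check along the same lines, as a minimal sketch (not gitup code; the host and port defaults are just the ones from the transcript):

Code:
/*
 * Standalone IPv4-only TCP reachability check; not gitup code, just
 * an illustration of what the telnet test above exercises.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    struct addrinfo hints, *res;
    const char *host = (argc > 1 ? argv[1] : "139.178.72.204");
    const char *port = (argc > 2 ? argv[2] : "443");
    int sock, rc;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_INET;       /* IPv4 only, as in the test above */
    hints.ai_socktype = SOCK_STREAM;

    if ((rc = getaddrinfo(host, port, &hints, &res)) != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return (1);
    }
    sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock == -1 || connect(sock, res->ai_addr, res->ai_addrlen) == -1) {
        perror("connect");
        freeaddrinfo(res);
        return (1);
    }
    printf("TCP connect to %s:%s over IPv4 succeeded\n", host, port);
    close(sock);
    freeaddrinfo(res);
    return (0);
}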


I assume the problem is with the IPv4 address (139.178.72.204).
IPv6 is completely removed from my system; I do not need IPv6.
I will not build the system with IPv6!
 
well, the internet is for pr0n
so when pr0n moves to ipv6, v4 will die
 
LOL again, because this might actually happen sooner, don't you think? ;)
The "big players" especially can't afford to drop IPv4, of course, so companies buy previously owned addresses (as good as new) for fantastic prices, hehe ;) I don't think this would apply so much to, well, this specific area 😈
 
FWIW, installing devel/git on my VPS with just 512 MB of memory worked perfectly for cloning ports and keeping them updated.

The latest net/gitup failed with an OOM (given that I have a 1.5 GB swap file, I still find that surprising).
 
This patch reduces memory usage to about a third when loading cached remote files from /var/db/gitup/ports, by avoiding lots of reallocs in append_string.

At the end of the load_remote_files function, RAM usage drops from 290 MB to 80 MB.

Code:
--- gitup.c    2021-05-05 03:33:34.000000000 +0300
+++ /tmp/gitup.c    2021-05-08 00:33:44.685833000 +0300
@@ -707,8 +707,14 @@
     struct file_node *file = NULL;
     char             *line = NULL, *hash = NULL, *path = NULL, *remote_files = NULL;
     char              temp[BUFFER_UNIT_SMALL], base_path[BUFFER_UNIT_SMALL], *buffer = NULL, *temp_hash = NULL;
-    uint32_t          count = 0, remote_file_size = 0, buffer_size = 0, buffer_length = 0;
-
+    uint32_t          count = 0, remote_file_size = 0, buffer_size = 0, buffer_length = 0,sz_need;
+    char *tuffer, *yab;
+    buffer_size = 32768;
+    tuffer = malloc(buffer_size);
+    if(!tuffer) {
+     err(EXIT_FAILURE, "load_remote_file_list: malloc tuffer 1");
+     }
+    yab = tuffer;
     load_file(connection->remote_files, &remote_files, &remote_file_size);
 
     while ((line = strsep(&remote_files, "\n"))) {
@@ -723,12 +729,16 @@
            obj_tree for what has been read. */
 
         if (strlen(line) == 0) {
-            if (buffer != NULL) {
+            if (buffer_length) {
+                    buffer = malloc(buffer_length);
+                    if(!buffer)
+                     err(EXIT_FAILURE, "load_remote_file_list: malloc buffer");
+                    memcpy(buffer,tuffer,buffer_length);
                 if (connection->clone == false)
                     store_object(connection, 2, buffer, buffer_length, 0, 0, NULL);
 
                 buffer = NULL;
-                buffer_size = buffer_length = 0;
+                buffer_length = 0;
             }
 
             continue;
@@ -770,14 +780,29 @@
 
             /* Add the line to the buffer that will become the obj_tree for this directory. */
 
+            sz_need = strlen(line) + strlen(path) + 2 + 20;
+            if(buffer_length + sz_need > buffer_size) {
+            /*
+            printf("HIT %d %d\n",buffer_length + sz_need,buffer_size);
+            */
+             tuffer = realloc(tuffer,buffer_size + 32768);
+             if(!tuffer)
+               err(EXIT_FAILURE, "load_remote_file_list: tuffer malloc 2");
+             buffer_size += 32768;
+             }
+            yab = tuffer + buffer_length;
             temp_hash = illegible_hash(hash);
-
+            sprintf(yab,"%s %s",line,path);
+            memcpy(yab+strlen(yab) + 1,temp_hash,20);
+            yab += sz_need;
+            buffer_length += sz_need;
+            /*
             append_string(&buffer, &buffer_size, &buffer_length, line, strlen(line));
             append_string(&buffer, &buffer_size, &buffer_length, " ", 1);
             append_string(&buffer, &buffer_size, &buffer_length, path, strlen(path));
             append_string(&buffer, &buffer_size, &buffer_length, "\0", 1);
             append_string(&buffer, &buffer_size, &buffer_length, temp_hash, 20);
-
+            */
             free(temp_hash);
         }
 
@@ -785,7 +810,7 @@
 
         RB_INSERT(Tree_Remote_Path, &Remote_Path, file);
     }
-
+    free(tuffer);
     free(remote_files);
 }
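
For reference, the core idea of the patch reduced to a standalone sketch (scratch_append and CHUNK are illustrative names, not gitup's): grow a scratch buffer in 32 KB steps and copy finished records out once, instead of reallocating on every small append the way append_string does. This makes the number of realloc calls scale with the total size rather than with the number of appends.

Code:
/*
 * Hedged sketch of the chunked-growth idea; names are illustrative,
 * not gitup's.
 */
#include <err.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK 32768

struct scratch {
    char   *data;   /* grows in CHUNK-sized steps */
    size_t  used;
    size_t  size;
};

static void
scratch_append(struct scratch *s, const char *src, size_t len)
{
    if (s->used + len > s->size) {
        /* Round the needed size up to the next CHUNK boundary. */
        size_t new_size = ((s->used + len + CHUNK - 1) / CHUNK) * CHUNK;
        char *tmp = realloc(s->data, new_size);

        if (tmp == NULL)
            err(1, "scratch_append: realloc");
        s->data = tmp;
        s->size = new_size;
    }
    memcpy(s->data + s->used, src, len);
    s->used += len;
}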
 
The patch below makes a clone of ports use about 300 MB less RAM.
It creates a backing file during unpack_objects() and keeps only the file offset in the stored object (object->buffer is kept NULL).
When the object buffer is needed, it is loaded from the backing file (/var/db/gitup/$repo.tmp).
Code:
--- /usr/ports/net/gitup/work/gitup-0.93/gitup.c    2021-05-09 07:57:39.000000000 +0300
+++ gitup.c    2021-05-10 19:12:50.942962000 +0300
@@ -70,7 +70,8 @@
     char     *ref_delta_hash;
     uint32_t  pack_offset;
     char     *buffer;
-    uint32_t  buffer_size;
+    uint32_t  buffer_size,file_offset;
+    char   can_free;
 };
 
 struct file_node {
@@ -119,6 +120,7 @@
     int                  verbosity;
     uint8_t              display_depth;
     char                *updating;
+    int                back_store;
 } connector;
 
 static void     append(char **, unsigned int *, const char *, size_t);
@@ -165,7 +167,32 @@
 static void     unpack_objects(connector *);
 static uint32_t unpack_variable_length_integer(char *, uint32_t *);
 static void     usage(const char *);
+static void     load_buffer(connector *,struct object_node *);
+static void    release_buffer(struct object_node *);
 
+static void release_buffer(struct object_node *obj)
+{
+if(!obj->can_free) {
+ /* only free file-backed buffers; in-memory objects keep theirs */
+ free(obj->buffer);
+ obj->buffer = NULL;
+ }
+}
+
+static void load_buffer(connector * connection,struct object_node *obj)
+{
+ int rd;
+ if(!obj->buffer) {
+  obj->buffer = malloc(obj->buffer_size);
+  if(!obj->buffer)
+   err(EXIT_FAILURE, "load_buffer: malloc");
+  lseek(connection->back_store,obj->file_offset,SEEK_SET);
+  rd = read(connection->back_store,obj->buffer,obj->buffer_size);
+  if(rd != (int)obj->buffer_size) {
+   err(EXIT_FAILURE, "load_buffer: read %d %d",rd,obj->buffer_size);
+   }
+  }
+ }
 /*
  * node_compare
  *
@@ -1734,7 +1761,6 @@
     char               *hash = NULL;
 
     hash = calculate_object_hash(buffer, buffer_size, type);
-
     /* Check to make sure the object doesn't already exist. */
 
     find.hash = hash;
@@ -1762,6 +1788,8 @@
         object->ref_delta_hash = (ref_delta_hash ? legible_hash(ref_delta_hash) : NULL);
         object->buffer         = buffer;
         object->buffer_size    = buffer_size;
+                object->can_free       = 1;
+                object->file_offset    = -1;
        
         if (connection->verbosity > 1)
             fprintf(stdout,
@@ -1798,9 +1826,20 @@
     uint32_t       file_size = 0, file_bits = 0, pack_offset = 0;
     uint32_t       lookup_offset = 0, position = 4;
     unsigned char  zlib_out[16384];
+        int nobj_old,tot_len = 0;
+        char remote_files_tmp[BUFFER_UNIT_SMALL];
 
     /* Check the pack version number. */
+        snprintf(remote_files_tmp, BUFFER_UNIT_SMALL,
+                "%s.tmp",
+                connection->remote_files);
+        connection->back_store = open(remote_files_tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
 
+        if (connection->back_store == -1)
+                err(EXIT_FAILURE,
+                        "save_tmp: write file failure %s",
+                        remote_files_tmp);                   
+
     version   = (unsigned char)connection->response[position + 3];
     position += 4;
 
@@ -1904,6 +1943,8 @@
 
         inflateEnd(&stream);
         position += stream.total_in;
+                write(connection->back_store,buffer,buffer_size);
+                nobj_old = connection->objects;
        
         store_object(connection,
             object_type,
@@ -1912,9 +1953,23 @@
             pack_offset,
             index_delta,
             ref_delta_hash);
-
+                if(nobj_old != connection->objects) {
+                 connection->object[nobj_old]->buffer = NULL;
+                 connection->object[nobj_old]->can_free = 0;
+                 connection->object[nobj_old]->file_offset = tot_len;
+                 }
+                tot_len += buffer_size;
+            free(buffer);   
         free(ref_delta_hash);
     }
+  close(connection->back_store);     
+  connection->back_store =  open(remote_files_tmp, O_RDONLY);
+  if (connection->back_store == -1)
+                err(EXIT_FAILURE,
+                 "open tmp ro:  failure %s",
+                 remote_files_tmp);
+
+  unlink(remote_files_tmp);   /* unlink now / deallocate on exit */
 }
 
 
@@ -2029,7 +2084,7 @@
         if ((merge_buffer = (char *)malloc(base->buffer_size)) == NULL)
             err(EXIT_FAILURE,
                 "apply_deltas: malloc");
-
+        load_buffer(connection,base);       
         memcpy(merge_buffer, base->buffer, base->buffer_size);
         merge_buffer_size = base->buffer_size;
 
@@ -2037,6 +2092,7 @@
 
         for (x = delta_count - 1; x >= 0; x--) {
             delta         = connection->object[deltas[x]];
+            load_buffer(connection,delta);   
             position      = 0;
             new_position  = 0;
             old_file_size = unpack_variable_length_integer(delta->buffer, &position);
@@ -2101,10 +2157,11 @@
              */
 
             memcpy(merge_buffer, layer_buffer, new_file_size);
+            release_buffer(delta);
         }
 
         /* Store the completed object. */
-
+        release_buffer(base);
         store_object(connection,
             base->type,
             merge_buffer,
@@ -2175,7 +2232,7 @@
             object.hash);
 
     /* Remove the base path from the list of upcoming deletions. */
-
+        load_buffer(connection,tree);
     file.path  = base_path;
     found_file = RB_FIND(Tree_Local_Path, &Local_Path, &file);
 
@@ -2291,7 +2348,7 @@
     }
 
     /* Add the tree data to the remote files list. */
-
+    release_buffer(tree);
     write(remote_descriptor, buffer, buffer_size);
     write(remote_descriptor, "\n", 1);
 
@@ -2346,6 +2403,7 @@
              */
 
             if (missing == false) {
+                load_buffer(connection,found_object);
                 check_hash = calculate_file_hash(
                     found_file->path,
                     found_file->mode);
@@ -2354,19 +2412,20 @@
                     found_object->buffer,
                     found_object->buffer_size,
                     3);
-
+                release_buffer(found_object);   
                 if (strncmp(check_hash, buffer_hash, 40) == 0)
                     update = false;
             }
 
             if (update == true) {
+                    load_buffer(connection,found_object);
                 save_file(found_file->path,
                     found_file->mode,
                     found_object->buffer,
                     found_object->buffer_size,
                     connection->verbosity,
                     connection->display_depth);
-
+                release_buffer(found_object);   
                 if (strstr(found_file->path, "UPDATING"))
                     extend_updating_list(connection,
                         found_file->path);
@@ -2409,13 +2468,14 @@
             "save_objects: cannot find %s",
             connection->want);
 
+    load_buffer(connection,found_object);                   
     if (memcmp(found_object->buffer, "tree ", 5) != 0)
         errc(EXIT_FAILURE, EINVAL,
             "save_objects: first object is not a commit");
 
     memcpy(tree, found_object->buffer + 5, 40);
     tree[40] = '\0';
-
+    release_buffer(found_object);
     /* Recursively start processing the tree. */
 
     snprintf(remote_files_new, BUFFER_UNIT_SMALL,
@@ -2460,14 +2520,14 @@
             errc(EXIT_FAILURE, EINVAL,
                 "save_objects: cannot find %s",
                 found_file->hash);
-
+        load_buffer(connection,found_object);       
         save_file(found_file->path,
             found_file->mode,
             found_object->buffer,
             found_object->buffer_size,
             connection->verbosity,
             connection->display_depth);
-
+                release_buffer(found_object);
         if (strstr(found_file->path, "UPDATING"))
             extend_updating_list(connection, found_file->path);
     }
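
For reference, the backing-store pattern the patch uses, as a standalone sketch (spill_open, spill_load, and struct spilled are illustrative names, not gitup's; pread(2) is used here where the patch does lseek+read):

Code:
/*
 * Hedged sketch of the spill-to-file pattern; names are illustrative,
 * not gitup's.
 */
#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

struct spilled {
    off_t  offset;   /* where the object's bytes sit in the backing file */
    size_t size;
};

/*
 * Reopen the backing file read-only and unlink it immediately: the
 * descriptor stays valid, and the kernel reclaims the space when the
 * process exits, even after a crash.
 */
static int
spill_open(const char *path)
{
    int fd = open(path, O_RDONLY);

    if (fd == -1)
        err(1, "spill_open: %s", path);
    unlink(path);
    return (fd);
}

/* Load an object's buffer back from the backing file on demand. */
static char *
spill_load(int backing_fd, const struct spilled *obj)
{
    char *buf = malloc(obj->size);

    if (buf == NULL)
        err(1, "spill_load: malloc");
    if (pread(backing_fd, buf, obj->size, obj->offset) != (ssize_t)obj->size)
        err(1, "spill_load: pread");
    return (buf);
}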
 
Is this code in version 0.93? Or will it be merged in a future release?
 
I just committed this. Thank you very much!
 
Is there a way to use a specific source IP address in gitup (something like the FETCH_BIND_ADDRESS environment variable in fetch, or the -S option in ping)?
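
For context, such an option would boil down to calling bind(2) on the socket before connect(2), which is the same mechanism ping -S and FETCH_BIND_ADDRESS use. A minimal sketch of that pattern (connect_from is a hypothetical helper, not gitup code):

Code:
/*
 * Hedged sketch of bind-before-connect; connect_from is a
 * hypothetical helper, not part of gitup.
 */
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <err.h>
#include <string.h>

static int
connect_from(const char *src_ip, const struct sockaddr_in *dst)
{
    struct sockaddr_in src;
    int sock;

    if ((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1)
        err(1, "socket");

    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    src.sin_port   = 0;                 /* any local port */
    if (inet_pton(AF_INET, src_ip, &src.sin_addr) != 1)
        errx(1, "bad source address: %s", src_ip);

    /* Binding first pins the connection's source address. */
    if (bind(sock, (struct sockaddr *)&src, sizeof(src)) == -1)
        err(1, "bind");
    if (connect(sock, (const struct sockaddr *)dst, sizeof(*dst)) == -1)
        err(1, "connect");
    return (sock);
}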
 
@OP: Can you make gitup watch a specific group of ports, like KDE? Right now, upgrading in place is a major pain. The ports tree has at least a dozen directories/categories where kf5-* ports are hiding, and just as many where plasma5-* ports are distributed. One idea I have is making that an option in the gitup.conf file...
 