This commit is intended to be what was intended to happen in the commit of Sun Nov 10 01:22:24 UTC 2024, see: http://mail-index.netbsd.org/source-changes/2024/11/10/msg154310.html The commit message for that applies to this one (wholly). I believe that the problem with that version which caused it to be reverted has been found and fixed in this version (a necessary change was made as part of one of the fixes, but the side-effect implications of that were missed -- bad bad me.)

In addition, I found some more issues with setting close-on-exec on other command lines. With:
	func 3>whatever
fd 3 (anything > 2) got close-on-exec set. That makes no difference to the function itself (nothing gets exec'd, therefore nothing gets closed) but does to any exec that might happen while running a command within the function. I believe that if this is done (just as if "func" were a regular command, and not a function) such open fds should be passed through to anything they exec - unless the function (or other command) takes care to close the fd passed to it, or explicitly turn on close-on-exec. I expect this usage to be quite rare, and not to make much practical difference.

The same applies to builtin commands, but is even less relevant there, eg:
	printf 3>whatever
would have set close-on-exec on fd 3 for printf. This is generally completely immaterial, as printf - and most other built-in commands - neither uses any fd other than (some of) 0, 1 & 2, nor execs anything. That is, except for the "exec" built-in which was the focus of the original fix (mentioned above) and which should remain fixed here, and for the "." command. Because of that last one (".") close-on-exec should not be set on built-in commands (any of them) for redirections on the command line.

This will almost never make a difference - any such redirections last only as long as the built-in command lasts (same with functions) and so will generally never care about the state of close-on-exec, and I have never seen a use of the "." command with any redirections other than stderr (which is unaffected here, fds <= 2 never get close-on-exec set). That's probably why no-one ever noticed.

There are still "fd issues" when running a (non #!) shell script, which are hard to fix, and which we should probably handle the way most other shells have: simply abandon the optimisation of not exec'ing a whole new shell (#! scripts do that exec) and just do it that way. Issues solved! One day.
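For illustration, a minimal sketch of the function case above (the names f, ./child and /tmp/whatever are just placeholders, in the style of the succ1/succ2 tests further down this log):

# ./child is a separate #! script that simply does:  echo "child wrote this" >&3
f() {
	./child		# child is exec'd from inside the function; with
			# close-on-exec no longer set on fd 3, its write
			# to fd 3 succeeds
}
f 3>/tmp/whatever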
Revert the recent change until I can work out how things are broken.
Back out the fcntl() 3rd arg -> (void *) change, leave it as an untouched int. The other way seems to break things.
exec builtin command redirection fixes

Several changes, all related to the exec special built in command, or to close-on-exec, one way or another. (Except a few white space and comment additions, KNF, etc.)

1. The bug found by Edgar Fuß reported in:
	http://mail-index.netbsd.org/tech-userlevel/2024/11/05/msg014588.html
has been fixed. Now "exec N>whatever" will set close-on-exec for fd N (as do ksh versions, and allowed by POSIX, though other shells do not), as has happened now for many years. But "exec cmd N>whatever" (which looks like the same command when redirections are processed), which was setting close-on-exec on N, now no longer does, so fd N can be passed to cmd as an open fd. For anyone who cares, the big block of change just after "case CMDBUILTIN:" in evalcommand() in eval.c is the fix for this (one line replaced by about 90 ... though most of that is comments or #if 0'd example code for later). It is a bit ugly, and will get worse if our exec command ever gets any options, as others have, but it does work.

2. When the exec builtin utility is used to alter the shell's redirections it is now "all or nothing". Previously the redirections were executed left to right; if one failed, no more were attempted, but the earlier ones remained. This makes no practical difference to a non-interactive shell, as a redirection error causes that shell to exit, but it makes a difference to interactive shells. Now if a redirection fails, any earlier ones which had been performed are undone. Note however that side effects of redirections (like creating, or truncating, files in the filesystem) cannot be reversed - just the shell's file descriptors are returned to how they were before the error. Similarly, usage errors on exec now exist ... our exec takes no options (but does handle "--" as POSIX says it must - has done for ages). Until now, that was the only magic piece of exec; running
	exec -a name somecommand
(which several other shells support) would attempt to exec the "-a" command, and most likely fail, causing immediate exit from the shell. Now that is a usage error - a non-interactive shell still exits, as exec is a special builtin, and any error from a special builtin causes a non-interactive shell to exit. But now, an interactive shell will no longer exit (and any redirections that were on the command will be undone, the same as for a redirection error).

3. When a "close on exec" file descriptor is temporarily saved, so the same fd can be redirected for another command (only built-in commands and functions matter; redirects for file system commands happen after a fork(), and at that stage if anything goes wrong, the child simply exits - but for non-forking commands, doing something like "printf >file" requires the previous stdout to be saved elsewhere, "file" opened to be the new stdout, then when printf is finished, the old stdout moved back). Anyway, if the fd being moved had close-on-exec set, then when it was moved back, the close-on-exec was lost. That is now fixed.

4. The fdflags command no longer allows setting close-on-exec on stdin, stdout, or stderr - POSIX requires that those 3 fds always be open (to something) when any normal command is invoked. With close-on-exec set on one of these, that is impossible, so simply refuse it (when "exec N>file" sets close-on-exec, it only does it for N>2).

Minor changes (should be invisible):

a. The shell now keeps track of the highest fd number it sees doing normal operations (there are a few internal pipe() calls that aren't monitored, and a couple of others, but in general the shell will now know the highest fd it ever saw allocated to it). This is mostly for debugging.

b. Calls to fcntl() passing an int as the "arg" are now all properly cast to the void * that the fcntl kernel interface is expecting to receive. I suspect that makes no actual difference to anything, but ...
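For illustration, a rough sketch of item 1 (paths are placeholders; fdflags is the builtin described elsewhere in this log):

exec 3>/tmp/log		# plain "exec N>file": close-on-exec is set on fd 3
fdflags -v 3		# expect cloexec to be reported
# whereas a redirection on a command given to exec now leaves fd N open
# for that command, e.g. (./child being any program that uses fd 3):
#	exec ./child 3>/tmp/log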
PR bin/53550 Here we go again... One more time to redo how here docs are processed (it has been a few years since the last time!) This is actually a relatively minor change, mostly to timing (to just when things happen). Now here docs are expanded at the same time the "filename" word in a redirect is expanded, rather than later when the here doc was being sent to its process. This actually makes things more consistent - but does break one of the ATF tests which was testing that we were (effectively) internally inconsistent in this area. Not all shells agree on the context in which redirection expansions should happen; some make any side effects visible to the parent shell (the majority do), others do the redirection expansions in a subshell so any side effects are lost. We used to have a foot in each camp, with the majority for everything but here docs, and the minority for here docs. Now we're all the way with LBJ ... (or something like that).
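For illustration, a sketch of the user-visible consequence (variable name arbitrary): side effects of here doc expansion are now seen by the parent shell, the same as for other redirection expansions:

unset x
cat <<EOF >/dev/null
${x:=assigned-during-heredoc-expansion}
EOF
echo "${x-still unset}"	# now prints the assigned value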
Detect write errors to stdout, and exit(1) from some built-in commands which (primarily) are used just to generate output (or with a particular option combination do so).
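For example (a sketch; /dev/full is assumed to exist and to fail writes with ENOSPC):

echo "some output" > /dev/full
echo $?		# now 1: the failed write to stdout is detected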
DEBUG mode changes only. NFC (NC) for any normally compiled shell. Mostly adding DEBUG mode tracing (when appropriate verbose tracing is enabled generally) whenever a shell (including subshell) process exits, so that the tracing should indicate why shells that vanish did so. Note for future investigators: if the relevant tracing is enabled, and a (sub-)shell still simply seems to have vanished without trace, the likely cause is that it was killed by a signal - and of those, the most common that occurs is SIGPIPE.
Fix an ordering error in the previous (and even earlier, going back a way, but made more serious with the recent changes). The n>&n operation (more or less a no-op, except it clears CLOEXEC) should precede almost everything else - and simply be made to fail if an attempt is made to apply it to a sh internal fd. We were renumbering the internal fd (the n> part considered first), which was dumb but OK before, and then rejecting the operation (the >&n part) when n should not be visible to the script. That made something of a mess (and could lead to the shell believing its job control tty was at a fd it never got moved to). Do things in the correct order, and simply fail that case for internal fds (for every other n>xxx for any xxx sh simply renumbers its internal fd n to some other fd before attempting the operation, even n>&- ... those are all fine). [In all the above the '>' is used in place of any redirect operator].
Improve the solution for the 2nd access to a fd which shouldn't be available ("13") issue reported by Jan Schaumann on netbsd-users. This fixes a bug in the earlier fix (a day or so ago) which could allow the shell's idea of which fd range was in use by the script to get wildly incorrect, but now also actually looks to see which fd's are in use as renamed other user fd's during the lifetime of a redirection which needs to be able to be undone (most redirections occur after a fork and are permanent in the child process). Attempting to access such a fd (as with attempts to access a fd in use by the shell for its own purposes) is treated as an attempt to access a closed fd (EBADF). Attempting to reuse the fd for some other purpose (which should be rare, even for scripts attempting to cause problems, since the shell generally knows which fds the script wants to use, and avoids them) will cause the renamed (renumbered) fd to be renamed again (moved aside to some other available fd), just as happens with the shell's private fds. Also, when a generic fd is required, don't give up because of EMFILE or similar unless there are no available fds at all (we might prefer >10 or bigger, but if there are none there, use anything). This avoids redirection errors when ulimit -n has been set small, and all the fds >10 that are available have been used, but we need somewhere to park the old user of a fd while we reuse that fd for the redirection.
Deal with some issues where fds intended only for internal use by the shell were available for manipulation by scripts (or the user). These issues were reported by Jan Schaumann on netbsd-users. The first allows the user to reference sh internal fds, and is a simple fix - any sh internal fd is simply treated as if it were closed when referenced by the script. These fds can be discovered by examining /proc/N/fd so it is not difficult for a script to discover which fd it should attempt to access. The second allows the user to reference a user level fd which is one that is normally available to it, but at a point where it should no longer be visible (when that fd has been redirected, for a built in command, so the original fd needs to be saved so it can be restored, the saving fd should not be accessible). It is not as easy for the script to determine which fd to attempt here, as the relevant one exists only during the lifetime of a built-in command (and similar), but there are ways in some cases (aside from looking at /proc from another process). Fix this one by watching which fds the user script is attempting to use, and avoid using those as temporary fds. This is possible in this case as we know what command is being run, before we need to save the fds it uses. That's different from the earlier case where when the shell allocates its fds we have no idea what it might reference later. Also clean up a couple of other minor code issues (NFC intended) that I noticed while here...
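For illustration, the sort of probe being defended against (fd 13 is just an example number, borrowed from the report mentioned above):

ls /proc/$$/fd		# discover fds the shell has open for itself
cat <&13		# referencing one of them now acts as if the fd
			# were closed: EBADF / "Bad file descriptor"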
Ooops, restore accidentally removed files from merge mishap
Sync with HEAD
Sync with HEAD
The previous commit was obviously made by a broken mindless automaton with an IQ that underflows when one attempts to enter it as an unnormalised 160 bit long long double... Whoever would believe that (~0 & anything) was a meaningful thing to write? And three times in one #define. That could not possibly have been me, could it? Simplify, simplify, simplify. NFC.
Inspired by (really the need for) Maya's patch to pkgsrc/shells/bash to allow bash to build fdflags on Solaris 10, here are some mods that fix that, and some other similar issues in the NetBSD version of fdflags. The bash implementation of fdflags is based upon the one Christos did for the NetBSD sh, so the issues are similar ... the NetBSD sh cannot yet (easily anyway) build on anything except NetBSD, so this change makes no current difference at all (just adds some compile time tests (#ifdef) which always work out the way things did before, when built on NetBSD). However, there is no system on which any modern shell can hope to work which does not support close on exec, or fcntl(F_SETFD,...) to set it. The O_CLOEXEC and FD_CLOEXEC definitions might not exist, but close on exec can still be manipulated. Since the primary rationale for the fdflags builtin was to be able to manipulate that state bit from scripts, it would be annoying to lose that one, and keep all the (less important) others, just because O_CLOEXEC is not defined, so do the fix (workaround) a different way than was done in the bash patch. Further, more than fdflags() will fail if O_CLOEXEC is not defined, so handle that as well. Also fix another oddity ... (noticed by reading the code) - if fcntl(F_GETFL,...) returned any bits set that we don't understand, the code was supposed to simply print their values as a hex constant, when fdflags is run with -v. However, the getflags() function was clearing all bits that the code did not know about ... so there is no way any unknown bit could ever make it out to be printed. Handle that a different way - instead of clearing unknown bits, clear any bits that get returned which we understand, but do not want to deal with (stuff like O_WRONLY, which should not be returned from the fcntl(), but who knows...) Leave any unknown bits that happen to be set, set, so that printone() can display them if appropriate. (This is most likely to happen when running an older shell on a new kernel where the kernel supports some new flag that the shell has not been taught to understand). NFCI that anyone should notice anytime soon.
DEBUG mode change only. Add one extra trace point. NFC for normal builds.
INTON / INTOFF audit and cleanup. No visible differences expected - there is a remote chance that some internal lossage may no longer occur in interactive shells that receive SIGINT (untrapped) at inopportune times, but you would have had to have been very unlucky to have ever suffered from that.
Sync with HEAD, resolve a few conflicts
Pull up following revision(s) (requested by kre in ticket #1125): bin/sh/redir.c: revision 1.61 Fix the <> redirection operator, which has been broken since it was first implemented in response to PR bin/4966 (PR Feb 1998, fix Feb 1999). The file named should not be truncated. No other shell truncates the file (<> was added to FreeBSD sh in Oct 2000, and did not include O_TRUNC) and POSIX certainly does not suggest that should happen (just that the file is to be created if it does not exist.) Bug pointed out in off-list e-mail by Martijn Dekker
Fix typo: O_ALTIO -> O_ALT_IO Noted by @jbeich via GitHub.
Sync with HEAD, resolve a couple of conflicts
Fix the <> redirection operator, which has been broken since it was first implemented in response to PR bin/4966 (PR Feb 1998, fix Feb 1999). The file named should not be truncated. No other shell truncates the file (<> was added to FreeBSD sh in Oct 2000, and did not include O_TRUNC) and POSIX certainly does not suggest that should happen (just that the file is to be created if it does not exist.) Bug pointed out in off-list e-mail by Martijn Dekker
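A quick check of the corrected behaviour (file name arbitrary):

printf 'keep me\n' > file
: <> file	# open read-write; must not truncate
cat file	# still prints: keep me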
Sync with HEAD Resolve a couple of conflicts (result of the uimin/uimax changes)
NFC: DEBUG (compile time) mode only change: Add some extra redirection (fd manipulation) tracing. While here, some white space fixes, and very minor KNF.
DEBUG mode only change. Add some tracing. NFC (without DEBUG).
Pull up following revision(s) (requested by kre in ticket #103): bin/kill/kill.c: 1.28 bin/sh/Makefile: 1.111-1.113 bin/sh/arith_token.c: 1.5 bin/sh/arith_tokens.h: 1.2 bin/sh/arithmetic.c: 1.3 bin/sh/arithmetic.h: 1.2 bin/sh/bltin/bltin.h: 1.15 bin/sh/cd.c: 1.49-1.50 bin/sh/error.c: 1.40 bin/sh/eval.c: 1.142-1.151 bin/sh/exec.c: 1.49-1.51 bin/sh/exec.h: 1.26 bin/sh/expand.c: 1.113-1.119 bin/sh/expand.h: 1.23 bin/sh/histedit.c: 1.49-1.52 bin/sh/input.c: 1.57-1.60 bin/sh/input.h: 1.19-1.20 bin/sh/jobs.c: 1.86-1.87 bin/sh/main.c: 1.71-1.72 bin/sh/memalloc.c: 1.30 bin/sh/memalloc.h: 1.17 bin/sh/mknodenames.sh: 1.4 bin/sh/mkoptions.sh: 1.3-1.4 bin/sh/myhistedit.h: 1.12-1.13 bin/sh/nodetypes: 1.16-1.18 bin/sh/option.list: 1.3-1.5 bin/sh/parser.c: 1.133-1.141 bin/sh/parser.h: 1.22-1.23 bin/sh/redir.c: 1.58 bin/sh/redir.h: 1.24 bin/sh/sh.1: 1.149-1.159 bin/sh/shell.h: 1.24 bin/sh/show.c: 1.43-1.47 bin/sh/show.h: 1.11 bin/sh/syntax.c: 1.4 bin/sh/syntax.h: 1.8 bin/sh/trap.c: 1.41 bin/sh/var.c: 1.56-1.65 bin/sh/var.h: 1.29-1.35 An initial attempt at implementing LINENO to meet the specs. Aside from one problem (not too hard to fix if it was ever needed) this version does about as well as most other shell implementations when expanding $((LINENO)) and better for ${LINENO} as it retains the "LINENO hack" for the latter, and that is very accurate. Unfortunately that means that ${LINENO} and $((LINENO)) do not always produce the same value when used on the same line (a defect that other shells do not share - aside from the FreeBSD sh as it is today, where only the LINENO hack exists and so (like for us before this commit) $((LINENO)) is always either 0, or at least whatever value was last set, perhaps by LINENO=${LINENO} which does actually work ... for that one line...) This could be corrected by simply removing the LINENO hack (look for the string LINENO in parser.c) in which case ${LINENO} and $((LINENO)) would give the same (not perfectly accurate) values, as do most other shells. POSIX requires that LINENO be set before each command, and this implementation does that fairly literally - except that we only bother before the commands which actually expand words (for, case and simple commands). Unfortunately this forgot that expansions also occur in redirects, and the other compound commands can also have redirects, so if a redirect on one of the other compound commands wants to use the value of $((LINENO)) as a part of a generated file name, then it will get an incorrect value. This is the "one problem" above. (Because the LINENO hack is still enabled, using ${LINENO} works.) This could be fixed, but as this version of the LINENO implementation is just for reference purposes (it will be superseded within minutes by a better one) I won't bother. However should anyone else decide that this is a better choice (it is probably a smaller implementation, in terms of code & data space then the replacement, but also I would expect, slower, and definitely less accurate) this defect is something to bear in mind, and fix. This version retains the *BSD historical practice that line numbers in functions (all functions) count from 1 from the start of the function, and elsewhere, start from 1 from where the shell started reading the input file/stream in question. In an "eval" expression the line number starts at the line of the "eval" (and then increases if the input is a multi-line string). 
Note: this version is not documented (beyond as much as LINENO was before) hence this slightly longer than usual commit message. A better LINENO implementation. This version deletes (well, #if 0's out) the LINENO hack, and uses the LINENO var for both ${LINENO} and $((LINENO)). (Code to invert the LINENO hack when required, like when de-compiling the execution tree to provide the "jobs" command strings, is still included, that can be deleted when the LINENO hack is completely removed - look for refs to VSLINENO throughout the code. The var funclinno in parser.c can also be removed, it is used only for the LINENO hack.) This version produces accurate results: $((LINENO)) was made as accurate as the LINENO hack made ${LINENO} which is very good. That's why the LINENO hack is not yet completely removed, so it can be easily re-enabled. If you can tell the difference when it is in use, or not in use, then something has broken (or I managed to miss a case somewhere.) The way that LINENO works is documented in its own (new) section in the man page, so nothing more about that, or the new options, etc, here. This version introduces the possibility of having a "reference" function associated with a variable, which gets called whenever the value of the variable is required (that's what implements LINENO). There is just one function pointer however, so any particular variable gets at most one of the set function (as used for PATH, etc) or the reference function. The VFUNCREF bit in the var flags indicates which func the variable in question uses (if any - the func ptr, as before, can be NULL). I would not call the results of this perfect yet, but it is close. Unbreak (at least) i386 build .... I have no idea why this built for me on amd64 (problem was missing prototype for snprintf without <stdio.h>) While here, add some (DEBUG mode only) tracing that proved useful in solving another problem. Set the line number before expanding args, not after. As the line_number would have usually been set earlier, this change is mostly an effective no-op, but it is better this way (just in case) - not observed to have caused any problems. Undo some over-aggressive fixes for a (pre-commit) bug that did not need these changes to be fixed - and these cause problems in another absurd use case. Either of these issues is unlikely to be seen by anyone who isn't an idiot masochist... PR bin/52280 removescapes_nl in expari() even when not quoted, CRTNONL's appear regardless of quoting (unlike CTLESC). New sentence, new line. Whitespace. Improve the (new) LINENO section, markup changes (with thanks to wiz@ for assistance) and some better wording in a few places. I am an idiot... revert the previous unintended commit. Remove some left over baggage from the LINENO v1 implementation that didn't get removed with v2, and should have. This would have had (I think, without having tested it) one very minor effect on the way LINENO worked in the v2 implementation, but my guess is it would have taken a long time before anyone noticed... Correct spelling in comments of DEBUG only code... (Perhaps) temporary fix to pkgtools (cwrappers) build (configure). Expanding `` containing \ \n sequences looks to have been giving problems. I don't think this is the correct fix, but it will do no worse harm than (perhaps) incorrectly calculating LINENO in this kind of (rare) circumstance. I'll look and see if there should be a better fix later. s/volatile/const/ -- wonderful how opposites attract like this.
NFC (normal use) - DEBUG only change, when showing empty arg list don't omit terminating \n. Free stack memory in a couple of obscure cases where it wasn't being done (one in probably dead code that is never compiled, the other in a very rare error case.) Since it is stack memory it wasn't lost in any case, just held longer than needed. Many internal memory management type fixes. PR bin/52302 (core dump with interactive shell, here doc and error on same line) is fixed. (An old bug.) echo "$( echo x; for a in $( seq 1000 ); do printf '%s\n'; done; echo y )" consistently prints 1002 lines (x, 1000 empty ones, then y) as it should (And you don't want to know what it did before, or why.) (Another old one.) (Recently added) Problems with ~ expansion fixed (mem management related). Proper fix for the cwrappers configure problem (which includes the quick fix that was done earlier, but extends upon that to be correct). (This was another newly added problem.) And the really devious (and rare) old bug - if STACKSTRNUL() needs to allocate a new buffer in which to store the \0, calculate the size of the string space remaining correctly, unlike when SPUTC() grows the buffer, there is no actual data being stored in the STACKSTRNUL() case - the string space remaining was calculated as one byte too few. That would be harmless, unless the next buffer also filled, in which case it was assumed that it was really full, not one byte less, meaning one junk char (a nul, or anything) was being copied into the next (even bigger buffer) corrupting the data. Consistent use of stalloc() to allocate a new block of (stack) memory, and grabstackstr() to claim a block of (stack) memory that had already been occupied but not claimed as in use. Since grabstackstr is implemented as just a call to stalloc() this is a no-op change in practice, but makes it much easier to comprehend what is really happening. Previous code sometimes used stalloc() when the use case was really for grabstackstr(). Change grabstackstr() to actually use the arg passed to it, instead of (not much better than) guessing how much space to claim, More care when using unstalloc()/ungrabstackstr() to return space, and in particular when the stack must be returned to its previous state, rather than just returning no-longer needed space, neither of those work. They also don't work properly if there have been (really, even might have been) any stack mem allocations since the last stalloc()/grabstackstr(). (If we know there cannot have been then the alloc/release sequence is kind of pointless.) To work correctly in general we must use setstackmark()/popstackmark() so do that when needed. Have those also save/restore the top of stack string space remaining. [Aside: for those reading this, the "stack" mentioned is not in any way related to the thing used for maintaining the C function call state, ie: the "stack segment" of the program, but the shell's internal memory management strategy.] More comments to better explain what is happening in some cases. Also cleaned up some hopelessly broken DEBUG mode data that were recently added (no effect on anyone but the poor semi-human attempting to make sense of it...). User visible changes: Proper counting of line numbers when a here document is delimited by a multi-line end-delimiter, as in cat << 'REALLY END' here doc line 1 here doc line 2 REALLY END (which is an obscure case, but nothing says should not work.) 
The \n in the end-delimiter of the here doc (the last one) was not incrementing the line number, which from that point on in the script would be 1 too low (or more, for end-delimiters with more than one \n in them.) With tilde expansion: unset HOME; echo ~ changed to return getpwuid(getuid())->pw_dir instead of failing (returning ~). POSIX says this is unspecified, which makes it difficult for a script to compensate for being run without HOME set (as in env -i sh script), so while not able to be used portably, this seems like a useful extension (and is implemented the same way by some other shells). Further, with HOME=; printf %s ~ we now write nothing (which is required by POSIX - which requires ~ to expand to the value of $HOME if it is set); previously if $HOME (in this case) or a user's directory in the passwd file (for ~user) were a null string, we failed the ~ expansion and left behind '~' or '~user'. Changed the long name for the -L option from lineno_fn_relative to local_lineno as the latter seemed to be marginally more popular, and perhaps more importantly, is the same length as the previously existing quietprofile option, which means the man page indentation for the list of options can return to (about) what it was before... (That is, less indented, which means more data/line, which means less lines of man page - a good thing!) Cosmetic changes to variable flags - make their values more suited to my delicate sensibilities... (NFC). Arrange not to barf (ever) if some turkey makes _ readonly. Do this by adding a VNOERROR flag that causes errors in var setting to be ignored (intended use is only for internal shell var setting, like of "_"). (nb: invalid var name errors ignore this flag, but those should never occur on a var set by the shell itself.) From FreeBSD: don't simply discard memory if a variable is not set for any reason (including because it is readonly) if the var's value had been malloc'd. Free it instead... NFC - DEBUG changes, update this to new TRACE method. KNF - white space and comment formatting. NFC - DEBUG mode only change - convert this to the new TRACE() format. NFC - DEBUG mode only change - complete a change made earlier (marking the line number when included in the trace line tag to show whether it comes from the parser, or elsewhere, as they tend to be quite different). Initially only one case was changed, while I pondered whether I liked it or not. Now it is all done... Also when there is a line tag at all, always include the root/sub-shell indicator character, not only when the pid is included. NFC: DEBUG related comment change - catch up with reality. NFC: DEBUG mode only change. Fix botched cleanup of one TRACE(). Be more forgiving when sorting options to allow reasonable (and intended) flexibility in option.list format. Changes nothing for current option.list. Now that excessive use of STACKSTRNUL has served its purpose (well, accidental purpose) in exposing the bug in its implementation, go back to not using it when not needed for DEBUG TRACE purposes. This change should have no practical effect on either a DEBUG shell (where the STACKSTRNUL() calls remain) or a non DEBUG shell where they are not needed. Correct the initial line number used for processing -c arg strings. (It was inheriting the value from end of profile file processing) - I didn't notice before as I usually test with empty or no profile files to avoid complications. Trivial change which should have very limited impact. Fix from FreeBSD (applied there in July 2008...)
Don't dump core with input like sh -c 'x=; echo >&$x' - that is where the word after a >& or <& redirect expands to nothing at all. Another fix from FreeBSD (this one from April 2009). When processing a string (as in eval, trap, or sh -c) don't allow trailing \n's to destroy the exit status of the last command executed. That is: sh -c 'false ' echo $? should produce 1, not 0. It is amazing what nonsense appears to work sometimes... (all my nonsense too!) Two bugs here, one benign because of the way the script is used. The other hidden by NetBSD's sort being stable, and the data not really requiring sorting at all... So as it happens these fixes change nothing, but they are needed anyway. (The contents of the generated file are only used in DEBUG shells, so this is really even less important than it seems.) Another ancient (highly improbable) bug bites the dust. This one caused by incorrect macro usage (ie: using the wrong one) which has been in the sources since version 1.1 (ie: forever). Like the previous (STACKSTRNUL) bug, the probability of this one actually occurring has been infinitesimal but the LINENO code increases that to infinitesimal and a smidgen... (or a few, depending upon usage). Still, apparently that was enough, Kamil Rytarowski discovered that the zsh configure script (damn competition!) managed to trigger this problem. source .editrc after we initialize so that commands persist! Make arg parsing in kill POSIX compatible with POSIX (XBD 2.12) by parsing the way getopt(3) would, if only it could handle the (required) -signumber and -signame options. This adds two "features" to kill, -ssigname and -lstatus now work (ie: one word with all of the '-', the option letter, and its value) and "--" also now works (kill -- -pid1 pid2 will not attempt to send the pid1 signal to pid2, but rather SIGTERM to the pid1 process group and pid2). It is still the case that (apart from --) at most 1 option is permitted (-l, -s, -signame, or -signumber.) Note that we now have an ambiguity, -sname might mean "-s name" or send the signal "sname" - if one of those turns out to be valid, that will be accepted, otherwise the error message will indicate that "sname" is not a valid signal name, not that "name" is not. Keeping the "-s" and signal name as separate words avoids this issue. Also caution: should someone be weird enough to define a new signal name (as in the part after SIG) which is almost the same name as an existing name that starts with 'S' by adding an extra 'S' prepended (eg: adding a SIGSSYS) then the ambiguity problem becomes much worse. In that case "kill -ssys" will be resolved in favour of the "-s" flag being used (the more modern syntax) and would send a SIGSYS, rather that a SIGSSYS. So don't do that. While here, switch to using signalname(3) (bye bye NSIG, et. al.), add some constipation, and show a little pride in formatting the signal names for "kill -l" (and in the usage when appropriate -- same routine.) Respect COLUMNS (POSIX XBD 8.3) as primary specification of the width (terminal width, not number of columns to print) for kill -l, a very small value for COLUMNS will cause kill -l output to list signals one per line, a very large value will cause them all to be listed on one line.) (eg: "COLUMNS=1 kill -l") TODO: the signal printing for "trap -l" and that for "kill -l" should be switched to use a common routine (for the sh builtin versions.) 
All changes of relevance here are to bin/kill - the (minor) changes to bin/sh are only to properly expose the builtin version of getenv(3) so the builtin version of kill can use it (ie: make its prototype available.) Properly support EDITRC - use it as (naming) the file when setting up libedit, and re-do the config whenever EDITRC is set. Get rid of workarounds for ancient groff html backend. Simplify macro usage. Make one example more like a real world possibility (it still isn't, but is closer) - though the actual content is irrelevant to the point being made. Add literal prompt support; this allows one to do: CA="$(printf '\1')" PS1="${CA}$(tput bold)${CA}\$${CA}$(tput sgr0)${CA} " Now libedit supports embedded mode switch sequences, improve sh support for them (adds PSlit variable to set the magic character). NFC: DEBUG only change - provide an externally visible (to the DEBUG sh internals) interface to one of the internal (private to trace code) functions. Include redirections in trace output from "set -x". Implement PS1, PS2 and PS4 expansions (variable expansions, arithmetic expansions, and if enabled by the promptcmds option, command substitutions.) Implement a bunch of new shell environment variables. Many are mostly useful in prompts when expanded at prompt time, but all are available for general use. Many of the new ones are not available in SMALL shells (they work as normal if assigned, but the shell does not set or use them - and there is no magic in a SMALL shell (usually for install media.)) Omnibus manual update for prompt expansions and new variables. Throw in some random cleanups as a bonus. Correct a markup typo (why did I not see this before the prev commit??) Sort options (our default is 0..9AaBbZz). Fix markup problems and a typo. Make $- list flags in the same order they appear in sh(1). Do a better job of detecting the error in pkgsrc/devel/libbson-1.6.3's configure script, ie: $(( which is intended to be a sub-shell in a command substitution, but is an arith subst instead, it needs to be written $( ( to do as intended. Instead of just blindly carrying on to find the missing )) somewhere, anywhere, give up as soon as we have seen an unbalanced ')' that isn't immediately followed by another ')' which in a valid arith subst it always would be. While here, there has been a comment in the code for quite a while noting a difference in the standard between the text descr & grammar when it comes to the syntax of case statements. Add more comments to explain why parsing it as we do is in fact definitely the correct way (ie: the grammar wins arguments like this...). DEBUG and white space changes only. Convert TRACE() calls for DEBUG mode to the new style. NFC (when not debugging sh). Mostly DEBUG and white space changes. Convert DEBUG TRACE() calls to the new format. Also #if 0 a function definition that is used nowhere. While here, change the function of pushfile() slightly - it now sets the buf pointer in the top (new) input descriptor to NULL, instead of simply leaving it - code that needs a buffer always (before and after) must malloc() one and assign it after the call. But code which does not (which will be reading from a string or similar) now does not have to explicitly set it to NULL (cleaner interface.) NFC intended (or observed.) DEBUG changes: convert DEBUG TRACE() calls to new format. Also, cause exec failures to always cause the shell to exit with status 126 or 127, whatever the cause.
127 is intended for lookup failures (and is used that way), 126 is used for anything else that goes wrong (as in several other shells.) We no longer use 2 (more easily confused with an exit status of the command exec'd) for shell exec failures. DEBUG only changes. Convert the TRACE() calls in the remaining files that still used it to the new format. NFC. Fix a reference after free (and consequent nonsense diagnostic for attempts to set readonly variables) I added in 1.60 by incompletely copying the FreeBSD fix for the lost memory issue.
Include redirections in trace output from "set -x"
Now that the shell is protecting its internal fds properly, moving them whenever the user tries to step on one, we can change our behaviour back to what the kernel considers to be that of a well behaved shell (wrt file descriptor usage). If our user causes problems, we will soon move into recalcitrant process territory, but that should normally be rare. This should reduce kernel overheads a little.
Resolve conflicts from previous merge (all resulting from $NetBSD keyword expansion)
Allow abbreviations of option names for the "fdflags -s" command. While documenting that, cleanup markup of the fdflags section of the man page.
When opening a file descriptor with "exec n>/file" (and similar) only set the close-on-exec bit when not in posix mode (to comply with posix...) OK: christos@
Sync with HEAD - tag prg-localcount2-base1
Keep track of which file descriptors the shell is using for its own purposes, and move them elsewhere whenever a user redirection happens to pick the same number. With this we can move the shell file descriptors back to lower values (be slightly kinder to the kernel) since we can no longer clash. (Also get rid of a little old unneeded code.) This also completes the fdflags command, which no longer permits access to (by way of either obtaining or changing) the shell's internal fds.
Sync with HEAD
Keep track of the biggest fd used by, or available to, the user/script and use that to control which fd's are examined by a (bare) fdflags (with no fd args). Usually this will mean that fdflags will no longer show the shell's internal use fds, only user fds. This is only a partial fix however, a user can easily discover the shell's fd usage (eg: using fstat) and can then still use fdflags to manipulate those fds (or even send output to them). The shell needs to monitor its own fd usage better, and keep out of the way of user fds - coming sometime later...
When verifying the size of the fd arg for fdflags skip leading 0's (fdflags 0000000001 should work, fdflags 10000000 should not)
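That is, sketching the two cases from the line above:

fdflags 0000000001	# accepted: leading zeroes skipped, fd 1
fdflags 10000000	# rejected: the fd number is out of range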
Sync with HEAD
Sync with HEAD
Fiddle the (new) fdflags implementation: Remove some unnecessary cuteness that limited error reporting. Permit just one -s arg to fdflags Be deterministic in the case of fdflags -s +cloexec,-cloexec 0 (and similar) - use the last specified, always. Allow: FD_0_FLAGS=$( fdflags -v 0 ) # do stuff, manipulating the flags fdflags -s "FD_0_FLAGS" 0 to save/restore flags for a fd. Correctly mask result of fcntl(fd, F_GETFD) with FD_CLOEXEC as the specs require before deciding close on exec is set. Improve portability as a tool, don't assume strtoi(), nor __arraycount() and avoid needlessly requiring recent C versions (ie: there's no need to sprinkle declarations in the middle of the code, it just makes them hard to find, and benefits nothing.) Still to do: As currently implemented, both user, and shell internal fds are reported, and can be manipulated. Allowing users to touch the shell's internal fds is bogus, and providing this easy way to allow users to discover which values they have is poor. Fixing this means getting rid of the use of fcntl(F_MAXFD) and replacing it with a shell maintained memory of what fds the user (script) has allocated. The shell's fd manipulation really still needs major work (including properly fixing bin/48875)
Add fdflags builtin. Discussed with Chet and he has implemented it for bash too.
Don't set cloexec for "exec n>&1" like ksh does (but not bash). Unit tests pass... If we don't like that, we should add some unit tests that fail.
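For illustration, a sketch of the behaviour this selects (assuming the fdflags builtin to inspect the result):

exec 3>&1	# duplicate stdout onto fd 3
fdflags -v 3	# cloexec is not set, so fd 3 stays open across exec
		# for commands that follow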
add missing <sys/stat.h>
More work on file descriptors... This is the copyfd() cleanup. copyfd() duplicates file descriptors - it used to be widely used, but these days has seen its popularity dwindle. Strip it of an option that ceased to be variable (simplifying code) and cause all its users to check its result, so it does not need to handle errors itself (simplifying code further), and make it become a private internal routine in redir.c (all callers from other places have switched to a more modern interface.) Make sure we error() if N>&N fails (if N is closed.)
Finish the fd reassignment fixes from 1.43 and 1.45 ... if we are moving a fd to an unspecified high fd number, we certainly do not want to hand that high fd off to other processes after an exec, so always set close-on-exec on the result (even if lack of fd's means no fd alteration happens.) This will (eventually) allow some other code that sets close-on-exec to be removed, but for now, doing it twice won't hurt. Also, in a N>&M type redirection, do not set close-on-exec if we don't want it. OK christos@
PR bin/51123 - make >&- work in all cases, and while doing that fix things so that >/dev/stdout </dev/stdin (etc) work as well (in all cases). ok christos@
Whitespace fixes. No functional change.
Fix handing of user file descriptors outside the 0..9 range. Also, move (most of) the shell's internal use fd's to much higher values (depending upon what ulimit -n allows) so they are less likely to clash with user supplied fd numbers. A future patch will (hopefully) avoid this problem completely by dynamically moving the shell's internal fds around as needed. (From kre@)
dedup.
Test for REDIR_KEEP in the non-copy case:
$ cat other1
#!/bin/sh
./other2 3>out
$ cat other2
#!/bin/sh
echo other2 1>&3
$ ./other1
Don't close-on-exec redirections created explicitly for the command being run; i.e. we want this to work:
$ cat succ1
#!/bin/sh
./succ2 6>out
$ cat succ2
#!/bin/sh
echo succ2 >&6
$ ./succ1
And this to fail:
$ cat fail1
#!/bin/sh
exec 6> out
echo "fail1" >&6
./fail2
exec 6>&-
$ cat fail2
#!obj.amd64/sh
echo "fail2" >&6
$ ./fail1
./fail2: 6: Bad file descriptor
XXX: Do we want a -k (keep) flag on exec to make redirections not close-on-exec?
PR/50619: Fix reversed test.
Don't leak redirected descriptors to exec'ed processes. This is what ksh does, but bash does not. For example:
$ cat test1
#!/bin/sh
exec 6> out
echo "test" >&6
sh ./test2
exec 6>&-
$ cat test2
echo "test2" >&6
$ ./test1
./test2: 6: Bad file descriptor
This fixes by side effect the problem of the rc system leaking file descriptors 7 and 8 to all starting daemons:
$ fstat -p 1359
USER CMD    PID FD MOUNT INUM MODE       SZ|DV R/W
root powerd 1359 wd /     2   drwxr-xr-x  512  r
root powerd 1359  0 /    63029 crw-rw-rw- null rw
root powerd 1359  1 /    63029 crw-rw-rw- null rw
root powerd 1359  2 /    63029 crw-rw-rw- null rw
root powerd 1359  3* kqueue pending 0
root powerd 1359  4 /    64463 crw-r----- power r
root powerd 1359  7 flags 0x80034<ISTTY,MPSAFE,LOCKSWORK,CLEAN>
root powerd 1359  8 flags 0x80034<ISTTY,MPSAFE,LOCKSWORK,CLEAN>
root powerd 1359  9* pipe 0xfffffe815d7bfdc0 -> 0x0 w
Note fd=7,8 pointing to the revoked pty from the parent rc process.
simplify and eliminate TOCTOU.
PR/48201: Miwa Susumu: Fix set -C (no clobber) for POSIX; from FreeBSD. Can't use O_EXCL because of device nodes; also truncate.
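A short sketch of the intended noclobber semantics (file names arbitrary):

set -C			# enable noclobber
echo hi > existing	# refused if "existing" is an existing regular file
echo hi > /dev/null	# still fine: device nodes are not protected
echo hi >| existing	# explicit override still truncates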
Rebase to HEAD as of a few days ago.
sync with head. for a reference, the tree before this commit was tagged as yamt-pagecache-tag8. this commit was split into small chunks to avoid a limitation of cvs. ("Protocol error: too many arguments")
fix descriptor leaks. PR/47805
this fix was taken from FreeBSD SVN rev 199953 (Jilles Tjoelker)
------------------------------------------------------------------------
r199953 | jilles | 2009-11-30 07:33:59 +0900 (Mon, 30 Nov 2009) | 16 lines

Fix some cases where file descriptors from redirections leak to programs.

- Redirecting fds that were not open before kept two copies of the
  redirected file.
    sh -c '{ :; } 7>/dev/null; fstat -p $$; true'
  (both fd 7 and 10 remained open)

- File descriptors used to restore things after redirection were not set
  close-on-exec, instead they were explicitly closed before executing a
  program normally and before executing a shell procedure. The latter must
  remain but the former is replaced by close-on-exec.
    sh -c 'exec 7</; { exec fstat -p $$; } 7>/dev/null; true'
  (fd 10 remained open)

The examples above are simpler than the testsuite because I do not want to
use fstat or procstat in the testsuite.
resync from head
constify
revert the previous (wrong branch)
constify
sync with head
Use C89 function definitions
NULL does not need a cast
Sync with HEAD
Tell copyfd if the caller wants the exact tofd or just fd >= tofd. Fixes "echo foo > /rump/bar" in a rump hijacked shell. reviewed by christos
Pull up following revision(s) (requested by msaitoh in ticket #1281): bin/sh/redir.c: revision 1.30 Conform to XCU Section 2.8.2 (Exit Status for Commands)
Pull up following revision(s) (requested by msaitoh in ticket #1992): bin/sh/redir.c: revision 1.30 Conform to XCU Section 2.8.2 (Exit Status for Commands)
sync with HEAD
Conform to XCU Section 2.8.2 (Exit Status for Commands)
Pull up revision 1.29 (requested by chs in ticket #777): PR/25699: David Laight: sh(1) hangs opening a named pipe as stdin for background process This happens because we vfork, and then open a named pipe with O_RDONLY and block in the child. We avoid this, by opening the file with O_NONBLOCK, and then reset it if we are vforked. XXX: this is an ugly fix.
PR/25699: David Laight: sh(1) hangs opening a named pipe as stdin for background process This happens because we vfork, and then open a named pipe with O_RDONLY and block in the child. We avoid this, by opening the file with O_NONBLOCK, and then reset it if we are vforked. XXX: this is an ugly fix.
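For illustration (path arbitrary), the case that used to hang:

mkfifo /tmp/fifo
cat < /tmp/fifo &	# previously the vfork()ed child blocked in open(2)
			# of the fifo, wedging the parent shell as well
echo data > /tmp/fifo
wait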
Move UCB-licensed code from 4-clause to 3-clause licence. Patches provided by Joel Baker in PR 22249, verified by myself.
Fixes from David Laight:
- ansification
- format of output of jobs command (etc)
- job identifiers %+, %- etc
- $? and $(...)
- correct quoting of output of set, export -p and readonly -p
- differentiation between normal and 'posix special' builtins
- correct behaviour (posix) for errors on builtins and special builtins
- builtin printf and kill
- set -o debug (if compiled with DEBUG)
- cd src obj (as ksh - too useful to do without)
- unset -e name, remove non-readonly variable from export list.
  (so I could unset -e PS1 before running the test shell...)
Revert previous change. No need to save rootshell. It is only affecting the non-vfork case. Having said that, it would be nice if pipelines of simple commands were vforked too. Right now they are not. Explain that setpgid() might fail because we are doing it both in the parent and the child case, because we don't know which one will come first. Suspending a pipeline prints %1 Suspended n times where n is the number of processes, but that was there before. It is easy to fix, but I'll leave the code alone for now.
Deal with rootshell not being maintained correctly in the vfork() case. Propagate isroot, throughout the eval process and maintain it properly. Fixes sleep 10 | cat^C not exiting because sleep and cat ended up in their own process groups, because wasroot was always true in the children.
VFork()ing shell: From elric@netbsd.org: Plus my changes: - walking process group fix in foregrounding a job. - reset of process group in parent shell if interrupted before the wait. - move INTON lower in the dowait so that the job structure is consistent. - error check all setpgid(), tcsetpgrp() calls. - eliminate unneeded strpgid() call. - check that we don't belong in the process group before we try to set it.
implement noclobber. From Ben Harris, with minor tweaks from me. Two unimplemented comments to go. Go Ben!
Doing the vfork work on ash on a branch to try to shake out the problems before I expose everyone to them. This checkin represents a merge of the prior work, which I backed out a while ago, to the HEAD only and does not incorporate any additional bugfixes. The additional bugfixes and code-cleanup will occur in later checkins. For reference the patches that were used are:
cvs diff -kk -r1.51 -r1.55 eval.c | patch
cvs diff -kk -r1.27 -r1.28 exec.c | patch
cvs diff -kk -r1.15 -r1.16 exec.h | patch
cvs diff -kk -r1.32 -r1.33 input.c | patch
cvs diff -kk -r1.10 -r1.11 input.h | patch
cvs diff -kk -r1.32 -r1.35 jobs.c | patch
cvs diff -kk -r1.9 -r1.11 jobs.h | patch
cvs diff -kk -r1.36 -r1.37 main.c | patch
cvs diff -kk -r1.20 -r1.21 redir.c | patch
cvs diff -kk -r1.10 -r1.11 redir.h | patch
cvs diff -kk -r1.10 -r1.12 shell.h | patch
cvs diff -kk -r1.22 -r1.23 trap.c | patch
cvs diff -kk -r1.12 -r1.13 trap.h | patch
cvs diff -kk -r1.23 -r1.24 var.c | patch
cvs diff -kk -r1.16 -r1.17 var.h | patch
All other changes were simply the resolution of the resulting conflicts, which occurred only in the merge of jobs.c. Begins to address PR: bin/5475
Back out previous vfork changes.
Now we use vfork(2) instead of fork(2) when we can.
PR/4966: Joel Reicher: Implement <> redirections which are documented in the man page.
Be more retentive about use of NOTREACHED and noreturn.
Delint.
PR/5848: David Holland: Use PIPE_BUF instead of hardcoding 4k
Fix compiler warnings.
PR/3452: Jerry Peek: Redirections of unopened fd to file failed.
for arg in a b c
do
	echo hi this is $arg 1>&3
done 3> foo
Update /bin/sh from trunk per request of Christos Zoulas. Fixes many bugs.
kill 'register'
PR/2808: fix redirection to the same file descriptor better error messages for failed pipes (from FreeBSD)
Merge in my changes from vangogh, and fix the x=`false`; echo $? == 0 bug.
convert to new RCS id conventions.
pull prototypes into scope for string functions.
clean up further. more patches from Jim Jegers
Add RCS ids.
sync with 4.4lite
44lite code
lseek long lossage.
Add RCS identifiers.
Jim "wilson@moria.cygnus.com" Wilson's patches to make C News (and other things) work.
changed "Id" to "Header" for rcsids
added rcs ids to all files
initial import of 386bsd-0.1 sources
Initial revision