It is not possible to associate more than one Windows console with a single process. The only way around that limitation is helios's trick.
The other option is to emulate a console window yourself. There are actually libraries that do this already. Try googling around "c/c++ terminal emulator" and the like. I recommend libvterm.
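If you go the libvterm route, basic usage looks roughly like this (untested sketch; the calls are taken from libvterm's public header, so double-check them against the version you actually install):

// Minimal sketch (C++, but libvterm itself is a plain C library).
#include <cstdio>
#include <cstring>
#include <vterm.h>

int main()
{
    const int rows = 25, cols = 80;

    VTerm *vt = vterm_new(rows, cols);          // in-memory "console"
    vterm_set_utf8(vt, 1);

    VTermScreen *screen = vterm_obtain_screen(vt);
    vterm_screen_reset(screen, 1);

    // Feed it whatever the child program wrote, escape sequences included.
    const char *output = "\x1b[1mhello\x1b[0m world\r\n";
    vterm_input_write(vt, output, std::strlen(output));

    // Read back the first line of the emulated screen, one cell at a time.
    for (int col = 0; col < cols; ++col) {
        VTermPos pos = { 0, col };
        VTermScreenCell cell;
        vterm_screen_get_cell(screen, pos, &cell);
        if (cell.chars[0] == 0) break;                    // nothing more written
        std::putchar(static_cast<char>(cell.chars[0]));   // ASCII-only for brevity
    }
    std::putchar('\n');

    vterm_free(vt);
}

You still have to draw the emulated screen somewhere yourself (a GUI window, another console, whatever); libvterm only keeps the terminal state for you.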
If I use ShellExecute to open cmd.exe, how can I get a handle to it for reading and writing?
Is that even possible? The trick works only because there's a special program specifically made to open and listen on a pipe, and print what's sent through the pipe on the console.
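For reference, the helper side of that trick is nothing exotic. It is roughly something like this, built as its own console program (rough sketch; the pipe name is made up for the example):

// A separate console program that owns its own console, listens on a named
// pipe, and prints whatever the main program sends.
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\second_console",           // hypothetical pipe name
        PIPE_ACCESS_INBOUND,                      // this program only reads
        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
        1, 4096, 4096, 0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE) return 1;

    if (!ConnectNamedPipe(pipe, nullptr) &&
        GetLastError() != ERROR_PIPE_CONNECTED) return 1;

    char buf[4096];
    DWORD read = 0;
    while (ReadFile(pipe, buf, sizeof(buf), &read, nullptr) && read > 0)
        fwrite(buf, 1, read, stdout);             // echo to *this* console

    CloseHandle(pipe);
}

The main program launches this helper with CreateProcess and CREATE_NEW_CONSOLE, then opens "\\.\pipe\second_console" with CreateFile and writes to it whenever it wants text to appear on the second console.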
Hey man, I'm not a native English speaker and I can't express myself very well. Please be patient if you want to help me; if you don't, then don't answer my questions. I don't know everything about C++, and sometimes I can't phrase my question well in English. If you're not annoyed, please explain what you mean in more detail, because I can't understand it.
How about system("consoleapp1.exe")? Use the system function. Additionally, you can pipe your output to any of these consoles using system("consoleapp1 >> consoleapp2") (if I remember that correctly). Just look up command-line piping.
Why is that bad? Console apps are nothing more than processes. If you open multiple console apps you can have them talk to each other using command-line piping, piping functions, or shared memory. I would prefer shared memory mapping. Command-line piping is probably a quick and dirty fix, but it works. If you require a secure environment, though, you'd want to stick to piping and shared memory functions. -- followup: the piping command is | ... so to pipe from app1 to app2 you launch it with c:\>app1 | app2 (assuming app2 is available). Everything from printf or std::cout will now go straight to app2. Just one of a few options.
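If it helps, the shared memory route on Windows looks roughly like this (rough sketch; the mapping name and size are made up, and real code needs some synchronization on top, e.g. a named event or mutex):

// One process creates a named mapping and writes into it, the other opens
// the same name and reads.
#include <windows.h>
#include <cstring>

int main()
{
    HANDLE map = CreateFileMappingA(
        INVALID_HANDLE_VALUE,        // backed by the page file, not a real file
        nullptr, PAGE_READWRITE,
        0, 4096,
        "Local\\demo_shared_block"); // hypothetical name both apps agree on
    if (!map) return 1;

    char *view = static_cast<char *>(
        MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, 4096));
    if (!view) return 1;

    // Writer side: drop a message into the block.
    std::strcpy(view, "hello from app1");

    // The other process calls OpenFileMappingA(FILE_MAP_READ, FALSE,
    // "Local\\demo_shared_block"), maps a view the same way, and reads it.

    UnmapViewOfFile(view);
    CloseHandle(map);
}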
AFAIK, the pipe operator only works if the program on the right is designed to listen on a pipe. Plus, with command line piping you don't have any fine-grained redirection. You can redirect stdout to one program and stderr to another, and that's about it. And you still don't get additional consoles because the new processes use the console that's already open.
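By fine-grained redirection I mean something you do in code rather than from the shell, roughly along these lines (sketch only; "child.exe" is a placeholder name):

// The parent creates two anonymous pipes and hands the child one for stdout
// and one for stderr, so each stream can be routed somewhere different.
#include <windows.h>
#include <cstdio>

int main()
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), nullptr, TRUE }; // inheritable handles

    HANDLE outRead, outWrite, errRead, errWrite;
    if (!CreatePipe(&outRead, &outWrite, &sa, 0)) return 1;
    if (!CreatePipe(&errRead, &errWrite, &sa, 0)) return 1;
    SetHandleInformation(outRead, HANDLE_FLAG_INHERIT, 0);  // keep parent ends private
    SetHandleInformation(errRead, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = outWrite;                 // child's stdout -> one pipe
    si.hStdError  = errWrite;                 // child's stderr -> another pipe
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi;
    char cmd[] = "child.exe";                 // placeholder child program
    if (!CreateProcessA(nullptr, cmd, nullptr, nullptr,
                        TRUE,                 // inherit handles
                        0, nullptr, nullptr, &si, &pi)) return 1;

    CloseHandle(outWrite);                    // parent only reads
    CloseHandle(errWrite);

    // Drain the child's stdout here; a second thread could drain errRead and
    // forward it to a completely different destination.
    char buf[4096]; DWORD n;
    while (ReadFile(outRead, buf, sizeof(buf), &n, nullptr) && n > 0)
        fwrite(buf, 1, n, stdout);

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess); CloseHandle(pi.hThread);
    CloseHandle(outRead); CloseHandle(errRead);
}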
True, the program on the right would require a stdin to connect the pipe. For fine-grained control you'll need a sort of "operator" or agent between your source and destination processes that knows where to redirect the output based on passed parameters. For that amount of work, though, you could easily do it more professionally using your method, and it would be faster since you wouldn't need a middleman process. But let me add that with a middleman process you can have it reference an external text file to determine the appropriate redirection, so you can add or remove mappings dynamically, at the cost of more commands involved (adding time). There is more than one way to skin a cat in programming; it's all about finding what works for your problem and sometimes thinking outside the box.
But let me add that with a middleman process you can have it reference an external text file to determine the appropriate redirection, so you can add or remove mappings dynamically, at the cost of more commands involved (adding time).
I don't see why you'd need a third process to do this.
Creating the additional processes and coding the data-sharing mappings directly into them limits you to a completely static solution: you need to know all possible inter-process mappings and code them before you compile. By separating your processes into truly individual agents you create a more dynamic solution. The third process simply does the job of redirection, using nothing more than a simple text or configuration file (which you are free to modify at any time) to determine those mappings. Each process calls the redirecting process. Again, if you know everything beforehand, then definitely stick with your operating system's piping and shared memory functions and package them all up into neat C++ classes.
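To make the idea concrete, a toy version of that router could look like this (the file name routes.txt, the tags, and the destination programs are all invented for the example):

// A mapping file decides where the data goes, so mappings can be changed
// without recompiling anything.
//
//   routes.txt:
//       A=app3.exe
//       B=app4.exe
//
// Usage: some_app | router A    (router forwards its stdin to app3.exe)
#include <cstdio>
#include <fstream>
#include <map>
#include <string>

int main(int argc, char *argv[])
{
    if (argc < 2) { std::fprintf(stderr, "usage: router <tag>\n"); return 1; }

    // Load the tag -> destination-command table from the config file.
    std::map<std::string, std::string> routes;
    std::ifstream cfg("routes.txt");
    std::string line;
    while (std::getline(cfg, line)) {
        auto eq = line.find('=');
        if (eq != std::string::npos)
            routes[line.substr(0, eq)] = line.substr(eq + 1);
    }

    auto it = routes.find(argv[1]);
    if (it == routes.end()) { std::fprintf(stderr, "no route\n"); return 1; }

    // Open the destination program and pump our stdin into its stdin.
    FILE *dest = _popen(it->second.c_str(), "w");   // MSVC; popen() elsewhere
    if (!dest) return 1;

    char buf[4096];
    size_t n;
    while ((n = std::fread(buf, 1, sizeof(buf), stdin)) > 0)
        std::fwrite(buf, 1, n, dest);

    _pclose(dest);
}

Changing a route is then just editing routes.txt, with no rebuild of any of the agents.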
So, you mean for example: program 1 has streams A, B, C, and D, which all pipe information to program 2, which, based on configuration, redirects output to process 3, 4, or 5 (4 to 3 mapping)?
I don't understand why the code in program 2 needs to run as a separate process. What do you get that you can't get with simple queues, perhaps with a thread if you need asynchronicity, all inside the process of program 1?
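For example, the whole thing could live inside program 1 as something like this (sketch; the dispatch is just a placeholder print):

// The "routing" is just a queue and a worker thread inside the one process.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Router {
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    void push(std::string msg) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(msg)); }
        cv.notify_one();
    }

    void run() {                       // worker: decide where each message goes
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return !q.empty() || done; });
            if (q.empty() && done) break;
            std::string msg = std::move(q.front()); q.pop();
            lk.unlock();
            // Dispatch to whatever destination the message belongs to;
            // printing stands in for that here.
            std::cout << "routed: " << msg << '\n';
        }
    }
};

int main()
{
    Router r;
    std::thread worker(&Router::run, &r);

    r.push("stream A data");
    r.push("stream B data");

    { std::lock_guard<std::mutex> lk(r.m); r.done = true; }
    r.cv.notify_one();
    worker.join();
}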
I know you're asking: why not just put the job of program 2 into program 1 or program 3 and get rid of program 2? Well, I suppose you could, and each could access the configuration file separately to determine where to pipe the data, but then each agent must do two jobs (its default job and routing) instead of just the job it was designed to do. With program 2, program 1 can go back to doing its own job and let program 2 handle the routing. It's like specialization. Additionally, if you wanted to fine-tune your routing code, you'd simply replace program 2 with a better design instead of recoding every program. Additionally, each call using system("...") would allow the agent to retrieve any new process handles if you swapped in a new redirector. I almost picture a sort of Darwinian theme here. I'm not sure where this dynamic, agent-based approach would apply in the real world, but it is a way of doing things if needed.