
Remote Development with Screen and Mosh

Since becoming part of industry, I've come to see the advantage of doing my work on a remote machine (a DevServer). Now historically, I had been this Linux nut who loved using tiling window managers. My workflow was to launch a handful of terminals to begin working and launch more as needed. It was a rather mundane setup that just worked. Unfortunately, it just doesn't work in industry. I encountered several challenges:

  • Launching a terminal connects me only to my machine
  • Cannot work locally (challenges through the roof to make this even possible)
  • Accessing the server requires SSH, and SSH requires two-factor auth (no auto login)

So the question became how can I reproduce, if not create an enhanced version of, my previous environment with minimal effort?

  • Find the simplest terminal emulator (st)
    • For better colors, add the solarized patch
  • Use screen locally
    • A keyboard only way to access scrollback buffer (content that was printed out that no longer fits on the screen)
    • Make the local screen respond to easier-to-reach keys: Alt-a instead of Ctrl-a
    • Quicker access to the screen window I'm looking for (windowlist -b), bound to the easier Alt-a ' (instead of Alt-a ")
  • Rename the terminal and screen window with the PWD and command executing inside (fun stuff with PROMPT_COMMAND)
  • Use screen remotely
    • Always ssh into a long-lived screen session
    • Restart that session automatically, in case I manually close it out or the machine is rebooted
    • No need to SSH in for each additional window, just create new windows in the current screen session
  • Use mosh
    • No need to deal with SSH flakiness -- automatic connection reestablishment after lossy network usage or suspend / resume

The only thing really missing is that I would love to be able to create multiple distinct views of my remote screen session without multiple SSH sessions. Think of it this way: if I have 3 windows open in a remote screen session, I can only view one of them at a time unless I SSH in again and attach to that same session. Ideally, I could move the multiplexing locally; alas, I couldn't figure out a clean way of moving the screen Unix domain socket locally and having a local screen connect to it.

Now it is time for the useful code bits.

My .screenrc:

vbell off
startup_message off
autodetach on
defscrollback 100000
shelltitle '$ |/home/davidiw bash'
#hardstatus string "%h"
#caption always "%{= kw} %-w%{= wk}%n*%t%{-}%+w%{= kw} %=%d %M %0c %{g}%H%{-}"
#hardstatus alwayslastline

escape ^Aa
register S ^A
# On the remote machine, I set this to "^[s", so I don't have to type Alt-a a
bindkey "^[a" process S
bind "\'" windowlist -b
bind "\"" select

Append this to .bashrc to get nice names for screen and xterm titles and to start screen with the default session (home):

function make_title {
  # The DEBUG trap fires before each command; $BASH_COMMAND holds it.
  # At an idle prompt the "command" is the trap line itself, so show
  # plain "bash" instead.
  if [[ $BASH_COMMAND == 'trap "make_title" DEBUG' ]]; then
    cmd="bash"
  else
    cmd=$BASH_COMMAND
  fi
  # Set the xterm title (OSC 0) and the screen window title (ESC k ... ESC \)
  echo -ne "\033]0;$PWD $cmd\007"
  echo -ne "\033k$PWD $cmd\033\\"
}

term=$TERM

case $term in
  *screen*)
    ;&  # fall through: inside screen we still want the settings below
  *term*)
    export TERM="xterm-256color"
    export PROMPT_COMMAND='trap "make_title" DEBUG'
    ;;
esac

case $term in
  *term*)
    # Attach to the existing "home" session or start a new one
    exec /bin/bash -c "screen -AUx home || screen -AUS home"
    ;;
esac

Future work:

  • Shared local vim buffer
  • Shared remote vim buffer
  • A git repository to make reproduction easy
  • Clickable URLs in the Terminal


GCC and the UD2 Instruction

A few colleagues and I are working on OS development. While most of the development has taken place on MacOS, I prefer Linux and primarily use a rolling release distribution called Arch. On the Mac, my colleagues obtained GCC 4.8 from MacPorts and everything compiles just fine for them. However, having a rolling release version of Linux implies I will always have the latest and greatest versions on my system. Usually that is fine, but sometimes not, as in this scenario. At some point, GCC started introducing UD2 instructions instead of emitting errors. Specifically, UD2 stands for undefined instruction, and executing one causes the system to halt. Why on earth would any compiler do this? It was absolutely baffling to see this type of behavior from a properly compiled program that was built using -Wall.

So I did some searching in the assembly output to find where the UD2 instruction was being generated and found one in the following code snippet:

static struct pci_func * alloc_pci_func() {
  if (pci_dev_list.total_dev == N_PCI_DEV) {
    KERN_DEBUG("Alloc pci_func from pci_dev_list error! no available \n");
    return NULL;
  }
  return &pci_dev_list.dev[pci_dev_list.total_dev++];
}


Where do you think the problem is? My initial reaction was that maybe this was due to some fancy overflow detection not working quite right; notice that we increment total_dev but prevent it from going beyond N_PCI_DEV. That was not it. So I tried a slightly different approach: I looked at our current optimization level, which happened to be -Os, effectively -O2 with some tweaks for output size. So I went to -O2 and then -O1; at -O2 the issue still existed, whereas at -O1 it did not. Taking a peek at the list of options enabled by -O2, I set the compilation to -O1 and began enabling -O2 options explicitly until I stumbled upon the problem: -fisolate-erroneous-paths-attribute. This flag happens to do the following: detect paths which trigger erroneous or undefined behavior due to a NULL value being used in a way which is forbidden by a "returns_nonnull" or "nonnull" attribute, isolate those paths from the main control flow, and turn the statement with erroneous or undefined behavior into a trap. Brilliant, the GCC folks figured it was better to turn returning NULL into undefined behavior than to warn us that maybe we should look into a different convention. Frankly, I'm not sure what the correct convention should be, perhaps a panic? But that seems a little harsh, especially if the system can handle running out of a limited resource. So to keep our -Os setting I also added the following compiler flag: -fno-isolate-erroneous-paths-attribute. Fortunately I found my issue, but it seems to be expected behavior from GCC. Mind you, this isn't the only example of a GCC UD2 issue.
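
For the curious, here is a minimal sketch that trips the same flag. The names are my own contrivances, not our course's code; the nonnull attribute plays the role that attribute-annotated library functions played for us:

#include <cstdio>

// 'use' promises, via the nonnull attribute, that it never receives NULL.
__attribute__((nonnull))
static void use(const int *p) { std::printf("%d\n", *p); }

static int *alloc_slot(unsigned n) {
    static int slots[4];
    if (n >= 4)
        return nullptr;  // out of resources, mirroring alloc_pci_func
    return &slots[n];
}

int main() {
    // After inlining at -O2 (or -Os), GCC can prove this path passes
    // NULL to a nonnull parameter, and -fisolate-erroneous-paths-attribute
    // replaces the call with a ud2 trap instead of issuing a warning.
    use(alloc_slot(5));
}

Compiling this with g++ -Os, then again with -Os -fno-isolate-erroneous-paths-attribute, and diffing the assembly should make the ud2 appear and disappear.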

bywave最新版-outline

Race Conditions Resulting in Double Free

In working with Qt and its signals and slots, I've encountered quite a few double free errors despite my use of QSharedPointer. One particularly puzzling situation I have stumbled upon involves the following model:

  1. An object, say controller, owns another object, say agent
  2. Controller subscribes to a event (signal) from agent called closing
  3. If the agent issues a closing signal, the controller removes it from a list of known agents (effectively freeing it as a result of the QSharedPointer).
  4. If the controller is closing, it calls close on the agents and then removes them from its list.

The double free occurs when the controller closes before the agents, effectively causing each agent to notify the controller that it is being closed while both are being destructed. This probably occurs because the Qt signal disconnect code happens in the base destructor (QObject), which is called after the derived destructor. The easy resolution was to disconnect all signals except destroyed.
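
Here is a boiled-down sketch of that cycle; the class names are mine, not from the real code, and like any Q_OBJECT type it needs moc:

#include <QList>
#include <QObject>
#include <QSharedPointer>

class Agent : public QObject {
    Q_OBJECT
public:
    ~Agent() override { emit closing(); }  // fires mid-destruction
signals:
    void closing();
};

class Controller : public QObject {
    Q_OBJECT
public:
    void add(const QSharedPointer<Agent> &agent) {
        m_agents.append(agent);
        connect(agent.data(), &Agent::closing,
                this, &Controller::onAgentClosing);
    }

    ~Controller() override {
        // The fix: sever the closing connection before the member list
        // is torn down; otherwise each dying Agent re-enters
        // onAgentClosing() on a half-destructed Controller.
        for (const QSharedPointer<Agent> &agent : m_agents)
            disconnect(agent.data(), &Agent::closing,
                       this, &Controller::onAgentClosing);
    }

private slots:
    void onAgentClosing() {
        // Dropping the QSharedPointer here frees the Agent; if this runs
        // while ~Controller is already destroying m_agents, it frees twice.
        for (int i = 0; i < m_agents.size(); ++i) {
            if (m_agents.at(i).data() == sender()) {
                m_agents.removeAt(i);
                break;
            }
        }
    }

private:
    QList<QSharedPointer<Agent>> m_agents;
};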


Serialization and Wire Protocols

A common problem in designing and writing distributed systems is the handling of the wire protocol. To solve it, many programmers go it alone and write their own serialization tools, while others trust third-party tools to ease their development. After doing a little bit of both, I'm not convinced I prefer one approach over the other.

The example:

First let me provide an example of a message:

SampleMessage {
  int record; // unique id
  Type type; // some enumed field
  bytes message; // The data
  bytes signature; // Integrity and authenticity of this "SampleMessage"
};

Custom made serializer:

Using our own serializer, we could do the following assuming a SampleMessage msg:
bytes buffer;

buffer.append(msg.record());
buffer.append(msg.type());
buffer.append(msg.message().length());
buffer.append(msg.message());
buffer.append(msg.signature().length());
buffer.append(msg.signature());

And then on the parsing side:
bytes buffer;

msg.set_record(buffer.read_int(0));
msg.set_type(buffer.read_int(4));
int length = buffer.read_int(8);
msg.set_message(buffer.mid(12, length));
int signature_length = buffer.read_int(12 + length);
msg.set_signature(buffer.mid(12 + length + 4, signature_length));

So the major caveats are the following: what is the size of Type, and is it uniform across all platforms? Also, we're making a lot of potentially unnecessary copies for what might be large datagrams.
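
One way to pin down the first caveat, sketched below with names of my own invention, is to force every field to a fixed-width type with an explicit byte order, so "what is the size of Type?" has the same answer on every platform:

#include <cstdint>
#include <vector>

// Serialize a 32-bit value big-endian, independent of host byte order.
static void put_u32(std::vector<uint8_t> &buf, uint32_t v) {
    buf.push_back(static_cast<uint8_t>(v >> 24));
    buf.push_back(static_cast<uint8_t>(v >> 16));
    buf.push_back(static_cast<uint8_t>(v >> 8));
    buf.push_back(static_cast<uint8_t>(v));
}

// Length-prefixed byte string, as in the message/signature fields above.
static void put_bytes(std::vector<uint8_t> &buf,
                      const uint8_t *data, uint32_t len) {
    put_u32(buf, len);
    buf.insert(buf.end(), data, data + len);
}

This does nothing for the copying caveat -- every append above still copies -- but it removes the platform ambiguity.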

Third-party serializers (without message definition):

Alternatively, let's assume we have something like QDataStream:
QByteArray data;
QDataStream stream(&data, QIODevice::WriteOnly);
stream << msg.record() << msg.type() << msg.message() << msg.signature();
// or maybe even
stream << msg;

For parsing:
QDataStream stream(data);
stream >> msg;
// or maybe not...
int record;
Type type;
QByteArray message;
QByteArray signature;
stream >> record >> type >> message >> signature;
msg.set_record(record);
...

In this case, we just have to check that our output is sane or perhaps look at the QDataStream and ensure that it is still in good working order (stream.status() != QDataStream::ReadPastEnd), but how do we check that the signature matches the tuple (record, type, message) in any efficient manner?

Third-party serializers (with message definition):

A completely different serializer, protobuf, would work as such:
std::stringstream stream(std::stringstream::out);
msg.SerializeToOstream(&stream);
std::string output = stream.str();

And on the return:
std::stringstream stream(output, std::stringstream::in);
msg.ParseFromIstream(&stream);

Protobuf doesn't handle the signature issue any more easily and requires both an external compiler and a library.

Thus far...

Protobufs would be great if we could encapsulate a Message within a SignedMessage; then we *should* be able to get the original character array used for constructing the Message and verify that the signature is correct. Unfortunately, that does not happen.
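
The usual workaround, sketched here assuming a hypothetical SignedMessage with payload and signature bytes fields (and an assumed sign() helper), is to serialize the inner message to bytes first, so the exact octets the signature covers are known -- at the cost of yet another copy:

// message SignedMessage {
//   bytes payload   = 1;  // a serialized SampleMessage
//   bytes signature = 2;  // covers exactly the payload octets
// }

std::string payload;
msg.SerializeToString(&payload);          // inner SampleMessage

SignedMessage signed_msg;
signed_msg.set_payload(payload);
signed_msg.set_signature(sign(payload));  // sign() is assumed here

std::string wire;
signed_msg.SerializeToString(&wire);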

QByteArray does allow for constructing a QByteArray from another without copying the underlying array. However, we do not have the access we need from QDataStream to know where into the QByteArray to construct the base (unsigned) message.
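
For reference, the no-copy construction looks like this; a minimal sketch, with the catch being that offset and length would have to come from somewhere other than QDataStream:

#include <QByteArray>

// Borrow a view of the unsigned portion of a datagram without copying.
// fromRawData does not take ownership, so the datagram must outlive
// any use of the returned view.
QByteArray unsignedView(const QByteArray &datagram, int offset, int length)
{
    return QByteArray::fromRawData(datagram.constData() + offset, length);
}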

Using our own method allows us to have this fine grained control but at the cost of writing more expressive code and having more debugging routines.

Similar Packets

Ideally we want to reduce our packet parsing as much as possible, so we can embed multiple packets in the same path. Using something like protobuf, where we must define the data we expect to be pushing around, makes this arbitrary behavior complicated: it requires us either to embed packets of one type as bytes in another, or to have the lower-level packet know about higher-layer packets, breaking modularity. The same could be said about QDataStream, but then again it allows us to avoid unnecessary copies. In either case, both scenarios feel unnatural. If we want our home-grown packets to have these features, the code will start feeling bloated and potentially complex -- welcome to a whole new world of coding bugs.

I'm still brainstorming on my conclusion, and hopefully I'll update when I'm satisfied. Until then...


Social Keys

Back in my days at UF, I worked on a project called SocialVPN with the intent of using Facebook as a medium for exchanging public keys amongst friends. Now I am revisiting this with another project called Dissent, in which we want to use public keys from a group of users who may not be friends. Ignoring the why for each of these applications, let me describe the way things were, where they have gone, what I would like to see, and what could be minimally done.

Back with SocialVPN, we created something called a "desktop application" to store the keys in Facebook. An application in this sense utilizes Facebook in some way to enhance the user's experience in another domain, such as games, organization, search, or identity... anything that can benefit from social identity or the contents of a Facebook profile. A desktop application, unlike a Web application, was a standalone application that did not require anything besides Facebook. Unfortunately, this flexibility is probably why desktop applications did not live very long. Using a Facebook application, we could store the keys within an application-specific store accessible only to the application. Unfortunately, the application required that both the Application ID and Secret be embedded within it, and thus a hacker could retrieve both of them, muck around in the application's data store, and perform man-in-the-middle attacks. I suspect this was one of the many reasons why Facebook moved to Web applications and away from desktop applications.

One day, in the midst of success, this suddenly stopped working. It was horrible! We quickly created a Web application to do the same thing, and this time, I suppose, it was actually "safe" (assuming we did not introduce any accidental bugs), but unfortunately this meant the clients had to trust us. I did not want that! Anyway, we got fed up with all this nonsense and began investigating GoogleTalk and XMPP. Wow, that was wonderful. Long story short, we found everything we needed to exchange keys between friends in a reasonable fashion without relying on third-party services (besides Google, of course).

Fast-forward several years, and again we are considering the same task of inserting keys into Facebook. I had hoped that the years would have been good to developers and Facebook would have refined their APIs somehow or another. Taking a step back, let me first explain what we really want. Ideally, a public key would be a first-class citizen in the SN realm, such that under your info there would be a special field where your key is stored. Perhaps the key would actually be signed by Facebook so it could be easily redistributed outside of Facebook, yet still trusted. In terms of API, perhaps the Facebook Graph API could be extended as such: http://graph.facebook.com/$userid?fields=public_key which would allow applications to retrieve a base64-encoded key. Furthermore, this interface should be open to the public, or as much as the profile is, so that the user can authenticate to members that are not friends but have some association, such as a common group. Unfortunately, this does not exist, nor have I seen anything like it coming from SNs. I will admit that there was a position paper 2 years after our SocialVPN papers clamoring for SocialKeys; I should read it and update this entry...

So where are we now? While Facebook's APIs have matured, so have their privacy measures. I suppose the APIs are largely the same from where we left off; we just never envisioned using Facebook's own internal data structures to store the key. Well, we did at one point consider using steganography to embed it within the user's profile picture, but I think that can be challenging, since the picture has probably been internally manipulated by Facebook, which would likely destroy traces of the public key. Other fields can also be populated, such as notes or picture albums, which can be searched by their subject or title, respectively, using FQL. Unfortunately, notes do not allow an application to change the privacy setting and instead use the user's default setting for notes; and while picture albums could potentially be used, their privacy setting cannot be looser than what the application is set to. By default, applications are set to the same visibility as the profile, so the only Facebook-limited option guaranteed to work would require user interaction. Furthermore, an application must be granted the privilege of creating, editing, or deleting content. Once this privilege has been granted by a user, it must be manually removed through the applications menu, not a very user-friendly interface, and tin foil hat or not, many users should be wary of an application that needs constant write access for a one-time (or occasional) operation.

A potential solution that should always work would be for us to have an application that reads the key, prints a base64 version for the user to copy into a note, and then verifies that the user has done so correctly. This may in fact be a completely reasonable approach, but it requires greater user interaction than most casual users are interested in, which would certainly limit our application's appeal.

Yet another issue that may come up in working with a profile is "hidden users," or users who show up in the group or have some presence, but whose profile is only accessible to friends. If an application needs uniform access to profiles, it should be able to detect these types of users, warn them that their profile is too restrictive, and prevent the accidental use of their credentials from impacting the rest of the group. For example, a friend of these users may see a different set of public keys than someone who is not.

So, thinking practically, what is one way we can accomplish this without any "additional" user interaction? Why not let an external app manage the data, like we did with SocialVPN? These days Facebook has teamed with Heroku to provide a free service, so devs do not even need to manage or pay for the hosting. While this seems like a great solution, it changes ownership of the key from the profile to the application. In the earlier iterations discussed above, the profile owned the key, independent of the application. In this system, the application owns the key but links it to a profile. Granted, this is probably more tin-foil-hat wearing than anything else, but because the key does not appear in the user's profile, another user must trust the server that it is the correct public key. I also believe this is a bit stronger of an assumption than trusting the social network, which is inherent in the design anyway. A third-party developer is a stop gap until social networks embrace social keys; perhaps I need to come up with persuasive arguments for why they should.

So maybe I can get some feedback on this problem.  That would be great.  Alternatively, we may just go with the external application, since it is guaranteed to work, but also discuss the "safer" alternatives.


Recursive Virtualization Xen in KVM

As I wrap up my contributions to UF's, or more importantly Renato Figueiredo's, Virtual Computing course, I am putting together a Debian Xen image which should work in Simics and a host of other VMMs. Because VMware is the least compatible, I always build my initial image in VMware and then quickly continue working on it in a chroot environment, of course after converting it into a raw disk format. Anyway, in my most recent attempt, I was trying to figure out why Xen would not boot inside KVM when it worked fine in VMware and Qemu.

First, a little background: Xen was compiled for 32-bit and running a 32-bit userspace, while my KVM install was 64-bit. As you may or may not be aware, x86-64 does not support the privileged rings any more and instead relies on the paging modes (U/S) for privilege. Xen 32-bit relies on the rings for performance purposes. So you would think that KVM would support the rings as well. As it turns out, KVM 64-bit does not appear to virtualize the rings, and thus Xen 32-bit does not work.

While this is still somewhat of a hunch, I tested a Xen 64-bit, which does not use the privilege rings. In KVM 64-bit, it worked flawlessly. That leaves me two more things to check: whether Xen 32-bit boots in KVM 32-bit, and whether simple ring-test boot code works in 32- or 64-bit KVM. The former would be useful for others, while the latter would be a conclusive, simple test to verify the functionality.

Anyway, the point is, you can run Xen in KVM if they are compiled for the same platform. I'd be curious to see whether you can run a 64-bit VM in 32-bit KVM and, if so, whether Xen 64-bit would work.


Dangers of IPTables NAT with DHCP

For Archer, I have set up a small private LAN for all the core resources to share, with a common way out into the main network, which connects to the Internet. I use Linux's IPTables masquerade for NAT, routing the packets from the private network to the Internet, and DHCP to automatically assign IP addresses to the resources. Recently, I extended the router to let other hosts use it to bridge into the local network, which may or may not be responsible for a whole host of other issues. The two primary issues were the routing of bridged packets through IPTables and DHCP messages traversing the bridge, both requests and replies from both sides.

The routing of bridged packets was particularly difficult to figure out. I had recently set up a web server, which had a public IP address and was bridged into the LAN. It was pingable and could ping other IP addresses within the LAN, but it couldn't ping the gateway. Furthermore, it was accessible from the Internet but could not initiate communication with the Internet. It was very strange behavior. I whipped out tcpdump and quickly noticed that packets it was sending out via the bridge were being picked up by IPTables masquerade and having their source address changed, so that when a packet came back, it had nowhere to go. The resolution to this problem was pretty easy: add "-s ip_range/mask" to the masquerade iptables line. So this:

iptables -t nat -A POSTROUTING -o bridged -j MASQUERADE

became

iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o bridged -j MASQUERADE

where bridged is the network bridge into the LAN, i.e., a Linux bridge connecting eth0 to other devices wanting access to the LAN, and 10.0.1.0/24 is the private address space. Now masquerade will only rewrite packets with a source address in 10.0.1.0/24.

Perhaps doing this opened up the other problem, which was that resources on the LAN were somehow getting DHCP leases from the private network. This never makes people happy, and worst of all, those individuals were losing their Internet connectivity. This could be related to the forwarding of broadcast Ethernet packets as well. The resolution was a bit easier: simply drop all DHCP packets that would cross the NAT:

iptables -A FORWARD -p udp --dport 67 -i bridged -j DROP
iptables -A FORWARD -p udp --dport 67 -o nat -j DROP

In this case, packets arriving on the bridged device (-i bridged) are requests from hosts on the LAN looking for IPs, while packets headed out the nat device (-o nat) are requests being forwarded toward the private network. These rules effectively prevent DHCP packets from crossing the router in either direction. The reality is, I probably should prevent all multicast / broadcast traffic from crossing the NAT, since there exists no gateway for packets to get into the private network.

Anyway, this was a rather unfun discovery, as I found NOWHERE online that discusses these two issues when configuring NATs. I hope this turns out to be useful for someone else out there.