devnullprod

Member
  • Content Count

    161
  • Joined

  • Last visited

3 Followers

About devnullprod

  • Rank
    Regular

Contact Methods

  • Website URL
    http://wipple.devnull.network

Profile Information

  • Gender
    Male
  • Location
    Upstate NY
  • Ripple Address
    rhkvfNv6tzh6CMfpXZdX2t7HGN2ZX46Tco

  1. The 6th contest challenge is live!!! Also starting this round, the prize amounts are doubled!!! That's right! Win 10 XRP if you guess the closest answer! And a random retweeter will win another 10 XRP. Good luck!
  2. Hrm, weird. I use both Firefox and Chrome to test the site and everything seems to work locally (famous last words, right?). Mind if I ask what platform you are testing this on? Maybe I can try to clone your env. Thanks for letting us know!
  3. Hey @mitchlang009 we also provide the ability to monitor any network of choice. Simply set the rippled URI to look at here! Also you can issue new transactions against the test net by creating an account here, setting the Key/URI in the settings, and submitting the corresponding form here! </shameless-self-promo>
  4. @dgoddard this is a complex topic, but to try and summarize: rippled stores ledger data in a tree-based data structure known as a 'SHAMap'; the logic behind this implementation lives primarily in the shamap directory of the project. As for persisting the SHAMap to disk, rippled implements a key/value database known as the nodestore, which may be configured to work with one of two backends, rocksdb or nudb. If you want to read the data structures directly off disk you'll need to use the appropriate client library, correctly construct the lookup keys, and decode the binary values, which again is not an easy task... But luckily we can help! We wrote a simple reference implementation of the nodestore parser in our XRP client project, XRBP. You'll have to learn a little bit of Ruby, but the modules encapsulating the data structures should be fairly self-contained and easy to understand. There is also our source code analysis which may be of assistance (just note the last sections towards the end are in progress / not uploaded yet). Hope this helps!
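     As a rough illustration of what such a lookup looks like (a sketch only, using the rocksdb-ruby gem and the same ledger hash / db path as in the rocksdb posts further down this page; the returned value is a raw binary nodestore object that still needs to be decoded):

        require "rocksdb"

        # Nodestore keys are the raw 32-byte hashes, not hex strings
        hash = "26706B14D370E69DE1F3A7ED6972CABC8FD249D3EB55510178A4F6765F58E69D"
        key  = [hash].pack("H*")

        # Open the rocksdb backend read-only so a running rippled isn't disturbed
        db = RocksDB::DB.new "/var/lib/rippled/db/rocksdb/rippledb.0899", {:readonly => true}

        # The value is a raw binary nodestore object, still to be decoded
        puts db[key].unpack("H*").first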
  5. The fifth contest challenge is now open!!!
  6. This new format does have an advantage in the sense that it can be parsed both in an incremental / serial manner and in parallel chunks. Also I'd imagine it could be encoded in a traditional 'barcode' format pretty easily. How would the application parse this format / generate the keypairs? Getting back to the original $topic, we bit the bullet and pulled in the Ruby bindings to the bitcoin secp256k1 library and are just using that as an interim solution for key generation & bindings. At some point it would be nice to implement this in pure Ruby to remove the C dependency, but for now efforts are best focused elsewhere rather than on reinventing the wheel. Glad to say we reached our goal with the original motivation behind all this: grokking and implementing an RTXP endpoint, which we have successfully used to receive P2P messages from the overlay network. We're planning on using this, our nodestore implementation, and additional analysis for the next section of our codebase analysis in the near future.
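     For illustration, here's roughly what the keypair-generation step boils down to. This sketch uses Ruby's stdlib OpenSSL on the secp256k1 curve purely as an example; it is not the secp256k1 bindings mentioned above, just the same curve:

        require "openssl"

        # Generate a keypair on secp256k1 (the curve rippled uses by default)
        key = OpenSSL::PKey::EC.generate("secp256k1")

        priv = key.private_key.to_s(16)       # private scalar, hex
        pub  = key.public_key.to_bn.to_s(16)  # uncompressed public point, hex

        puts "private: #{priv}"
        puts "public:  #{pub}"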
  7. Cool, thanks for the info Nik. I understand your concerns pertaining to seed security and the "standard way" for deriving the key from the seed. We will refine our approach to incorporate your suggestions going forward (while trying to keep it easy to understand for reference purposes). Stay tuned for updates!
  8. Agree. This effort mostly started off to generate node keys for RTXP communication, but expanded into more general key / entity generation. In the former case the seed family doesn't apply as we just need a random private key, but yes, for account generation we'll look into the seed family soon.
  9. Of course random seed collision is of low probability given a sufficient minimum seed length, but we didn't find this constraint in the source. Again, there's a lot there, so it might just be something we missed.
  10. This was mostly to handle the case where secure random wasn't used and the developer / user chose to go with a fixed seed. But agreed that in the former case it's unnecessary. The line above the securerandom call is a seed which, if uncommented (and the following securerandom call removed), is used as the basis for the private key digest. One thing that's puzzling me (or perhaps this is a final edge case that we have yet to address) is the lack of a 'salt' variable; e.g., if two users happen to randomly use the same seed, won't they both generate the same account?
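     A tiny sketch of the concern (the digest function here is just for illustration, not necessarily the one the module uses):

        require "digest"

        seed_a = "hello world"
        seed_b = "hello world"   # a different user who happens to pick the same fixed seed

        key_a = Digest::SHA512.hexdigest(seed_a)
        key_b = Digest::SHA512.hexdigest(seed_b)

        # With no per-user salt the derived key material is identical,
        # so both users would end up controlling the same account
        puts key_a == key_b   # => true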
  11. +1, good point. Right now this module uses the default 'securerandom' interface to generate the default seed, but we'll look into how exactly this is implemented & the fallback cases. Thanks for the link to the nodejs module; it's great to have several parallel implementations to use as a reference.
  12. Hello and happy Tuesday everyone (or Wednesday, depending on where you are in the world)! We hope you're having a pleasant day/night wherever you are! Just a quick post today pertaining to key and account generation.
     We've been diving into a lot of specifics w/ rippled data structures recently. Now that our nodestore reader is mostly wrapped up (at least for the rocksdb backend), we've begun looking at other components and subsystems. We're not looking to do a full-fledged rippled implementation, but rather to implement specific components for easier integration into other systems, without having to go through costly network / foreign-interface / other calls. Also, since our implementation is written in Ruby, a language designed to be very programmer-friendly and human-understandable, perhaps it can be used as a terse "reference implementation" for the various standards that make up the XRPL. It's not there yet, but we can dream...
     In any case, back to $topic. As part of our library we threw together a few modules to generate keys & subsequently account, node, validator, etc. IDs that are compliant with the network. We mostly dug through the rippled source to understand specifics, but this page was also instrumental in completing the picture (again many thanks to the ripple-devs & others for putting all this awesome documentation together; many open source projects are lacking this!!!). AFAIK the logic is complete & compliant with the rippled implementation, and we even verified it by writing a few test modules, utilizing parsers in the C++ rippled codebase to test our generated addresses.
     We just wanted to throw this out here to make sure that it looks good before we start advertising it / sharing it with the general public. I'd hate to hear that a bug in our key generation code resulted in funds being sent to an invalid address! To simplify things, I also threw together a gist with the serial list of calls from the relevant XRBP modules. If anyone has a few moments to look at it, it would really be appreciated (there isn't anything too Ruby-specific here; perhaps the only thing being the pack / unpack methods). Many thanks!
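     For anyone reviewing, here is a condensed sketch of the address-encoding step as documented for the XRPL (account ID = RIPEMD-160 of SHA-256 of the public key, then base58check with the XRPL alphabet and a 0x00 type prefix). The public key below is a placeholder, not a real key, and this is only a sketch, not the XRBP code itself:

        require "digest"

        # rippled's base58 dictionary (note it starts with 'r')
        ALPHABET = "rpshnaf39wBUDNEGHJKLM4PQRST7VWXYZ2bcdeCg65jkm8oFqi1tuvAxyz"

        def base58_xrpl(bytes)
          num = bytes.unpack("H*").first.to_i(16)
          out = ""
          while num > 0
            num, rem = num.divmod(58)
            out = ALPHABET[rem] + out
          end
          # each leading zero byte becomes a leading 'r'
          bytes.each_byte { |b| break unless b.zero?; out = ALPHABET[0] + out }
          out
        end

        pubkey  = ["02" + "11" * 32].pack("H*")   # placeholder 33-byte compressed pubkey
        acct_id = Digest::RMD160.digest(Digest::SHA256.digest(pubkey))

        payload = [0].pack("C") + acct_id                                # 0x00 account prefix
        check   = Digest::SHA256.digest(Digest::SHA256.digest(payload))[0, 4]

        puts base58_xrpl(payload + check)   # classic address, starts with 'r'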
  13. Just to complete this issue, I was able to get this working with the Ruby bindings. The first step is to build rocksdb according to the instructions; make sure to also run 'make shared_lib' to build the shared libraries (some compile-time flags may be needed: CXXFLAGS='-Wno-error=deprecated-copy -Wno-error=pessimizing-move'). To simplify things you can run 'make install' to install the library system-wide, but I personally don't like doing this. To get around it, make sure to specify the location of your rocksdb checkout when installing the gem:

        export ROCKSDB_DIR=/path/to/rocksdb
        gem install rocksdb-ruby -- --with-rocksdb-dir=${ROCKSDB_DIR}/ --with-rocksdb-include=${ROCKSDB_DIR}/include --with-rocksdb-lib=${ROCKSDB_DIR}/ -fPIC

     (One caveat: in order to get the 'max_open_files' option working we had to include a custom rocksdb-ruby patch and rebuild the gem.) Once this is said and done, the following script accomplishes the same as above:

        require "rocksdb"

        # Convert the ledger hash from hex into the raw 32-byte key
        ledger = "26706B14D370E69DE1F3A7ED6972CABC8FD249D3EB55510178A4F6765F58E69D"
        ledger = [ledger].pack("H*")
        puts ledger.bytes.join " "

        # Open the rippled rocksdb nodestore read-only and fetch the value
        rocksdb = RocksDB::DB.new "/var/lib/rippled/db/rocksdb/rippledb.0899",
                                  {:readonly => true, :max_open_files => 2000}
        puts "DB: #{rocksdb[ledger].unpack("H*").join(" ")}"

     Run it with:

        LD_LIBRARY_PATH='/path/to/rocksdb/' ruby rocks.rb

     Output:

        38 112 107 20 211 112 230 157 225 243 167 237 105 114 202 188 143 210 73 211 235 85 81 1 120 164 246 118 95 88 230 157
        DB: 0000000000000000014c57520002c612e501633ddeccef1a6bd5ab292ed5c988a58f71909262fe055f30830832f3c4305d5ac44a2ad5251667fb0c7d64d4735958b34400d15c599d073e007b4f7d06f309f1559c91c73284d054ac346e5a094b24311cd2bbf9aeb72c5f819d0a99ae9f8bc8b1acca61cc6b7024462ac124462ac20a00

     Easy as that!
  14. OK, after quite a bit of fiddling around with it, we got it working, first with the ldb tool and then in C++:

        $ ldb --db=/var/lib/rippled/db/rocksdb/rippledb.0899 --try_load_options --ignore_unknown_options --hex get 0x26706B14D370E69DE1F3A7ED6972CABC8FD249D3EB55510178A4F6765F58E69D
        > wal_dir loaded from the option file doesn't exist. Ignore it.
        0x0000000000000000014C57520002C612E501633DDECCEF1A6BD5AB292ED5C988A58F71909262FE055F30830832F3C4305D5AC44A2AD5251667FB0C7D64D4735958B34400D15C599D073E007B4F7D06F309F1559C91C73284D054AC346E5A094B24311CD2BBF9AEB72C5F819D0A99AE9F8BC8B1ACCA61CC6B7024462AC124462AC20A00

     The C++ example required us to clone and build rocksdb from the upstream source, as the version shipped with our linux distribution (Fedora 28) does not include some required components (snappy compression). Once rocksdb was built (see the docs on github) we were able to use the following code & compiler command to build an executable that could read from the db:

        // Extract the ledger specified below from a rippled rocksdb database
        //
        // Make sure you have a recent version of rocksdb available and make sure to
        // install dependencies before building:
        //   https://github.com/facebook/rocksdb/blob/master/INSTALL.md
        //
        // If cloned/built in ../rocksdb, you can compile this with:
        //
        //   g++ -I../rocksdb/include -I../rocksdb/include/rocksdb/ -I../rocksdb/ rocks.cpp ../rocksdb/librocksdb.a
        //     -lpthread -lsnappy -lbz2 -lz -std=c++11 -faligned-new
        //     -DHAVE_ALIGNED_NEW -DROCKSDB_PLATFORM_POSIX -DROCKSDB_LIB_IO_POSIX -DOS_LINUX -fno-builtin-memcmp
        //     -DROCKSDB_FALLOCATE_PRESENT -DSNAPPY -DZLIB -DBZIP2 -DROCKSDB_MALLOC_USABLE_SIZE
        //     -DROCKSDB_PTHREAD_ADAPTIVE_MUTEX -DROCKSDB_BACKTRACE -DROCKSDB_RANGESYNC_PRESENT
        //     -DROCKSDB_SCHED_GETCPU_PRESENT -march=native -DHAVE_SSE42 -DHAVE_PCLMUL -DROCKSDB_SUPPORT_THREAD_LOCAL

        #include <algorithm>
        #include <iostream>

        #include "rocksdb/db.h"
        #include "rocksdb/utilities/options_util.h"
        #include "rocksdb/table.h"
        #include "rocksdb/filter_policy.h"

        // Ledger hash to look up
        const char* ledger = "0x26706B14D370E69DE1F3A7ED6972CABC8FD249D3EB55510178A4F6765F58E69D";

        // Convert a "0x..." hex string into raw bytes
        std::string HexToString(const std::string& str) {
          std::string result;
          std::string::size_type len = str.length();
          if (len < 2 || str[0] != '0' || str[1] != 'x') {
            fprintf(stderr, "Invalid hex input %s. Must start with 0x\n", str.c_str());
            throw "Invalid hex input";
          }
          if (!rocksdb::Slice(str.data() + 2, len - 2).DecodeHex(&result)) {
            throw "Invalid hex input";
          }
          return result;
        }

        // Convert raw bytes back into a "0x..." hex string
        std::string StringToHex(const std::string& str) {
          std::string result("0x");
          result.append(rocksdb::Slice(str).ToString(true));
          return result;
        }

        int main(void){
          std::string k = HexToString(ledger);
          rocksdb::Slice slice = k;

          rocksdb::Options options;
          options.max_open_files = 2000; // cap open files for performance

          rocksdb::DB* db;
          rocksdb::Status status = rocksdb::DB::OpenForReadOnly(options,
              "/var/lib/rippled/db/rocksdb/rippledb.0899", &db);
          if (!status.ok()){
            std::cerr << "Err " << status.ToString() << std::endl;
            return 1;
          }

          std::string value;
          status = db->Get(rocksdb::ReadOptions(), slice, &value);
          if (!status.ok()){
            std::cerr << "Err " << status.ToString() << std::endl;
            return 1;
          }

          std::cout << "Value: " << StringToHex(value) << std::endl;
          return 0;
        }

     Output:

        Value: 0x0000000000000000014C57520002C612E501633DDECCEF1A6BD5AB292ED5C988A58F71909262FE055F30830832F3C4305D5AC44A2AD5251667FB0C7D64D4735958B34400D15C599D073E007B4F7D06F309F1559C91C73284D054AC346E5A094B24311CD2BBF9AEB72C5F819D0A99AE9F8BC8B1ACCA61CC6B7024462AC124462AC20A00

     The final step for us will be to get the higher level rocksdb bindings working against the custom-built rocksdb lib, but I don't foresee that being too big of a challenge. Thanks again for the help!
  15. Understood, though in the case of ledgers a direct lookup based on hash should work, should it not? (Would anyone be able to try the C++ example I gave? You can compile it with g++ rocksdb.cpp -lrocksdb.) I seem to have traced the entire path; the call that I was missing was literally between applyTxs and the calls to flushDirty. Specifically, the call to OpenView#apply, passing the new Ledger instance created a few lines above, results in the SHAMaps being modified (OpenView#apply -> RawStateTable#apply -> Ledger#rawErase|rawInsert|rawUpdate, and OpenView#apply -> Ledger#rawTxInsert, with Serializer actually serializing the SLEs to NodeStore object data). Still trying to figure out the manual db lookup though...