diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
index 6c79215c00ddc14456508c3e5fcbb76fe0e7b88b..df186ef653458832d130aadfcdb6e9c2a684eef3 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
@@ -92,4 +92,4 @@ Execute the above code as:
     $ mpiexec -bycore -bind-to-core python hello_world.py
 ```
 
-In this example, we run MPI4Py enabled code on 4 nodes, 16 cores per node (total of 64 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.html).
+In this example, we run MPI4Py-enabled code on 4 nodes, 16 cores per node (64 processes in total), with each Python process bound to a different core. More examples and documentation can be found on the [MPI for Python webpage](https://pypi.python.org/pypi/mpi4py).
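+
+The `hello_world.py` script itself is not reproduced in this hunk; a minimal sketch of such an MPI4Py script (illustrative only, the names below are assumptions rather than the repository's actual example) could look like:
+
+```
+    from mpi4py import MPI
+
+    # communicator spanning every process started by mpiexec
+    comm = MPI.COMM_WORLD
+    print("Hello from rank %d of %d on node %s"
+          % (comm.Get_rank(), comm.Get_size(), MPI.Get_processor_name()))
+```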
diff --git a/docs.it4i/salomon/software/intel-suite/intel-advisor.md b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
index cf25a765ce349510bd49faa4058543c87f489e2d..3d074032a5709b27148738c992fe991341ec79c7 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-advisor.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
@@ -27,6 +27,6 @@ In the left pane, you can switch between Vectorization and Threading workflows.
 
 References
 ----------
-1.  [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/advisorxe_2015_tut_lin_c)
-2.  [Product     page](https://software.intel.com/en-us/intel-advisor-xe)
+1.  [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/intel-advisor-tutorial-vectorization-windows-cplusplus)
+2.  [Product page](https://software.intel.com/en-us/intel-advisor-xe)
 3.  [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux)
diff --git a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
index e88fff56b1fbe779b4ee4d5e94b94df281145231..62fe24fee7a803b7ba28660bf3eb1561423c85a4 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
@@ -29,7 +29,7 @@ To view and analyze the trace, open ITAC GUI in a [graphical environment](../../
     $ traceanalyzer
 ```
 
-The GUI will launch and you can open the produced *.stf file.
+The GUI will launch and you can open the produced `*.stf` file.
 
 ![](../../../img/Snmekobrazovky20151204v15.35.12.png)
 
@@ -38,5 +38,5 @@ Please refer to Intel documenation about usage of the GUI tool.
 References
 ----------
 1.  [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux)
-2.  [Intel® Trace Analyzer and Collector - Documentation](http://Intel®%20Trace%20Analyzer%20and%20Collector%20-%20Documentation)
+2.  [Intel® Trace Analyzer and Collector - Documentation](https://software.intel.com/en-us/intel-trace-analyzer)
 
diff --git a/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
index 490b2cfc89ae0ccac7425460d5bae362bc3ab834..00762481d2f8b26ee42f0181bff8803314627549 100644
--- a/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
@@ -93,4 +93,4 @@ Execute the above code as:
     $ mpiexec --map-by core --bind-to core python hello_world.py
 ```
 
-In this example, we run MPI4Py enabled code on 4 nodes, 24 cores per node (total of 96 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.md).
+In this example, we run MPI4Py-enabled code on 4 nodes, 24 cores per node (96 processes in total), with each Python process bound to a different core. More examples and documentation can be found on the [MPI for Python webpage](https://pypi.python.org/pypi/mpi4py).
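+
+Beyond the hello-world pattern, MPI4Py also exposes MPI collective operations. A minimal sketch (illustrative only, not taken from the linked documentation) of broadcasting a Python dictionary from rank 0 to all other ranks:
+
+```
+    from mpi4py import MPI
+
+    comm = MPI.COMM_WORLD
+    # rank 0 prepares the data, every other rank starts with None
+    data = {"step": 1, "dt": 0.01} if comm.Get_rank() == 0 else None
+    # bcast pickles the object on rank 0 and delivers a copy to every rank
+    data = comm.bcast(data, root=0)
+    print("rank %d received %s" % (comm.Get_rank(), data))
+```
+
+Such a script would be launched with the same mpiexec invocation as shown above.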