php get URL of current file directory

Use echo $_SERVER['PHP_SELF'];

For example if the URL is http://localhost/~andy/test.php

The output would be:

/~andy/test.php

That's enough to generate a relative URL.

If you want the directory your current script is running in – without the filename – use:

echo dirname($_SERVER['PHP_SELF']);

In the case above that will give you /~andy (without test.php at the end). See http://php.net/manual/en/function.dirname.php

Please note that echo getcwd(); is not what you want, based on your question. That gives you the location on the filesystem/server (not the URL) that your script is running from. The directory the script is located in on the server's filesystem, and the URL, are two completely different things.
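
To illustrate the difference (the filesystem path below is hypothetical; it depends entirely on where the script lives on your server):

echo getcwd();                        // e.g. /home/andy/public_html  (filesystem path)
echo dirname($_SERVER['PHP_SELF']);   // /~andy                       (URL path)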

There is also a function to parse URLs built in to PHP: http://php.net/manual/en/function.parse-url.php
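
For instance, a minimal sketch of parse_url() applied to the example URL from above (the array keys are the standard components the function returns):

$parts = parse_url('http://localhost/~andy/test.php');
// $parts['scheme'] => 'http'
// $parts['host']   => 'localhost'
// $parts['path']   => '/~andy/test.php'
echo dirname($parts['path']); // /~andy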

If your URL is like this: https://localhost.com/this/is/a/url

$_SERVER['DOCUMENT_ROOT'] – gives the system path [/var/www/html/this/is/a/url]

$_SERVER['PHP_SELF'] – gives the path of the current file (after the domain name) [/this/is/a/url]

$_SERVER['SERVER_NAME'] – gives the domain name [localhost.com]

$_SERVER['HTTP_REFERER'] – gives the correct HTTP(S) protocol and domain name. [https://localhost.com]

If you would like to get the full URL, you can do something like:

echo $_SERVER['HTTP_REFERER'] . $_SERVER['PHP_SELF'];

However, I believe that in this case all you need is the relative path, and for that you should only need $_SERVER['PHP_SELF'];

php get URL of current file directory

I've found a solution here:
https://stackoverflow.com/a/1240574/7295693

This is the code I'll now be using:

function get_current_file_url($Protocol = 'http://') {
    return $Protocol . $_SERVER['HTTP_HOST'] . str_replace($_SERVER['DOCUMENT_ROOT'], '', realpath(__DIR__));
}
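
A hypothetical usage of that function (the resulting URL depends entirely on your host and document root):

echo get_current_file_url();            // e.g. http://localhost/path/to/current/dir
echo get_current_file_url('https://');  // same path, with the https scheme instead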

javascript – How can I make XHR.onreadystatechange return its result?

You are dealing with an asynchronous function call here. Results are handled when they arrive, not when the function finishes running.

That's what callback functions are for. They are invoked when a result is available.

function get(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            // defensive check
            if (typeof callback === 'function') {
                // apply() sets the meaning of this in the callback
                callback.apply(xhr);
            }
        }
    };
    xhr.send();
}
// ----------------------------------------------------------------------------


var param = 'http://example.com/';                /* do NOT use escape() */
var finalUrl = 'http://RESTfulAPI/info.json?url=' + encodeURIComponent(param);

// get() completes immediately...
get(finalUrl,
    // ...however, this callback is invoked AFTER the response arrives
    function () {
        // this is the XHR object here!
        var resp  = JSON.parse(this.responseText);

        // now do something with resp
        alert(resp);
    }
);

Notes:

  • escape() has been deprecated for a long time. Do not use it; it does not work correctly. Use encodeURIComponent().
  • You could make the send() call synchronous by setting the async parameter of open() to false. This would result in your UI freezing while the request runs, and you don't want that.
  • There are many libraries that have been designed to make Ajax requests easy and versatile. I suggest using one of them; a tiny illustration follows below.
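
As an illustration of that last point (this assumes jQuery is loaded on the page; the URL is the placeholder from the example above), a library helper still hands the result to a callback once the response arrives:

$.getJSON('http://RESTfulAPI/info.json', { url: 'http://example.com/' }, function (resp) {
    // runs asynchronously, after the response arrives
    alert(resp);
});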

You can't do it at all for asynchronous XHR calls. You cannot make JavaScript wait for the HTTP response from the server; all you can do is hand the runtime system a function to call (your handler), and it will call it. However, that call will come long after the code that set up the XHR has finished.

All is not lost, however, as that handler function can do anything. Whatever it is that you wanted to do with a return value you can do inside the handler (or from other functions called from inside the handler).

Thus in your example, you'd change things like this:

    xhr.onreadystatechange = function()
    {
        if (xhr.readyState == 4)
        {
            var resp = JSON.parse(xhr.responseText);
            //
            // ... whatever you need to do with resp ...
            //
        }
    };

javascript – How can I make XHR.onreadystatechange return its result?

A small edit to the answer in this post (https://stackoverflow.com/a/5362513/4766489): pass the parsed result to the callback directly:

...
if (typeof callback == 'function') {
     //var resp  = xhr.responseText;
     var resp  = JSON.parse(xhr.responseText);
     callback(resp);
}
...

And when you call it:

...
function(data) {
    alert(data);
    /* now do something with resp */
}
...
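
Putting the pieces together, a minimal sketch of calling the modified get() (URL and handling taken from the earlier example):

get('http://RESTfulAPI/info.json?url=' + encodeURIComponent('http://example.com/'),
    function (data) {
        // data is already parsed JSON here, because the modified
        // handler above calls callback(resp)
        alert(data);
    }
);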

php – mysqli_real_escape_string – example for 100% safety

Your isolated and simplified example is technically safe.

However, there are still two problems with it:

  • the assumption: the very statement of the question is built on the assumption that mysqli_real_escape_string() is related to any security issue. Which is but a grave delusion. This is a string formatting function that protects you from SQL injection only as a side effect. But such protection is neither the goal nor the purpose of the function, and therefore it should never be used for that purpose.
  • the inherent separability of the code you posted. The protection consists of three parts:
    • setting the correct encoding
    • escaping special characters
    • wrapping the escaped value in quotes

It is not only the fact that some of these obligatory measures could be forgotten but, again, the statement of the question stresses only a single part – escaping. It is only escaping that is always emphasized, while the two other measures hardly get mentioned at all. Just look at your question – you meant the code but asked about a function. So any literal answer to the question you asked will leave the fatally wrong impression that mysqli_real_escape_string() alone is all right.

In short, the statement of the question helps to promote the most dangerous of PHP-related delusions: that this function protects from SQL injection.

Unlike this complex three-part equation, prepared statements constitute an inseparable measure. You cannot forget one part. You cannot misuse it. Try mysqli_real_escape_string() to protect an identifier, and it will silently go unnoticed until the actual injection happens. Try a prepared statement for an identifier – and get an error.
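
To make the contrast concrete, here is a minimal sketch of the prepared-statement approach with mysqli (the table and column names are made up for illustration; $mysqli is assumed to be an already-open connection):

$stmt = $mysqli->prepare('SELECT id, title FROM articles WHERE author = ?');
$stmt->bind_param('s', $author);   // 's' = string; the value never touches the SQL text
$stmt->execute();
$result = $stmt->get_result();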

EasyMock vs Mockito: design vs maintainability?

I won't argue about the test readability, size, or testing techniques of these frameworks; I believe they are equal. But with a simple example I'll show you the difference.

Given: We have a class which is responsible for storing something somewhere:

public class Service {

    public static final String PATH = "path";
    public static final String NAME = "name";
    public static final String CONTENT = "content";
    private FileDao dao;

    public void doSomething() {
        dao.store(PATH, NAME, IOUtils.toInputStream(CONTENT));
    }

    public void setDao(FileDao dao) {
        this.dao = dao;
    }
}

and we want to test it:

Mockito:

public class ServiceMockitoTest {

    private Service service;

    @Mock
    private FileDao dao;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
        service = new Service();
        service.setDao(dao);
    }

    @Test
    public void testDoSomething() throws Exception {
        // given
        // when
        service.doSomething();
        // then
        ArgumentCaptor<InputStream> captor = ArgumentCaptor.forClass(InputStream.class);
        Mockito.verify(dao, times(1)).store(eq(Service.PATH), eq(Service.NAME), captor.capture());
        assertThat(Service.CONTENT, is(IOUtils.toString(captor.getValue())));
    }
}

EasyMock:

public class ServiceEasyMockTest {
    private Service service;
    private FileDao dao;

    @Before
    public void setUp() {
        dao = EasyMock.createNiceMock(FileDao.class);
        service = new Service();
        service.setDao(dao);
    }

    @Test
    public void testDoSomething() throws Exception {
        // given
        Capture<InputStream> captured = new Capture<InputStream>();
        dao.store(eq(Service.PATH), eq(Service.NAME), capture(captured));
        replay(dao);
        // when
        service.doSomething();
        // then
        assertThat(Service.CONTENT, is(IOUtils.toString(captured.getValue())));
        verify(dao);
    }
}

As you can see, both tests are essentially the same, and both of them pass.
Now, let's imagine that somebody else changed the Service implementation and tries to run the tests.

New Service implementation:

dao.store(PATH + separator, NAME, IOUtils.toInputStream(CONTENT));

A separator was added at the end of the PATH constant.

What do the test results look like now? First of all, both tests will fail, but with different error messages:

EasyMock:

java.lang.AssertionError: Nothing captured yet
    at org.easymock.Capture.getValue(Capture.java:78)
    at ServiceEasyMockTest.testDoSomething(ServiceEasyMockTest.java:36)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

Mockito:

Argument(s) are different! Wanted:
dao.store(
    "path",
    "name",
    <Capturing argument>
);
-> at ServiceMockitoTest.testDoSomething(ServiceMockitoTest.java:34)
Actual invocation has different arguments:
dao.store(
    "path",
    "name",
    [email protected]
);
-> at Service.doSomething(Service.java:13)

What happened in the EasyMock test; why wasn't the result captured? Was the store method not executed? But wait a minute, it was, so why is EasyMock lying to us?

It's because EasyMock mixes two responsibilities in a single line – stubbing and verification. That's why, when something goes wrong, it's hard to understand which part is causing the failure.

Of course you can tell me: just change the test and move verify before the assertion. Wow, are you serious? Should developers keep in mind some magic order enforced by the mocking framework?

By the way, it won’t help:

java.lang.AssertionError: 
  Expectation failure on verify:
    store("path", "name", capture(Nothing captured yet)): expected: 1, actual: 0
    at org.easymock.internal.MocksControl.verify(MocksControl.java:111)
    at org.easymock.classextension.EasyMock.verify(EasyMock.java:211)

Still, it tells me that the method was not executed, but it was, only with different parameters.

Why is Mockito better? This framework doesn't mix two responsibilities in a single place, and when your tests fail, you will easily understand why.

if we care about the Design of the code then Easymock is the better choice as it gives feedback to you by its concept of expectations

Interesting. I found that the concept of expectations makes many devs put more and more expectations in their tests only to satisfy the UnexpectedMethodCall problem. How does that influence the design?

The test should not break when you change code. The test should break when the feature stops working. If one likes tests that break whenever any code change happens, I suggest writing a test that asserts the md5 checksum of the java file 🙂

EasyMock vs Mockito: design vs maintainability?

I'm an EasyMock developer so I'm a bit partial, but of course I've used EasyMock on large scale projects.

My opinion is that EasyMock tests will indeed break once in a while. EasyMock forces you to do a complete recording of what you expect. This requires some discipline. You should really record what is expected, not what the tested method currently needs. For instance, if it doesn't matter how many times a method is called on a mock, don't be afraid of using andStubReturn. Also, if you don't care about a parameter, use anyObject() and so on. Thinking in TDD can help with that.
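
A hedged sketch of what that looks like in practice (exists() is a made-up method used only for illustration; store() is the void method from the Service example above):

FileDao dao = EasyMock.createMock(FileDao.class);

// A method returning a value (hypothetical exists()): stub it instead of
// recording an exact call count.
EasyMock.expect(dao.exists(EasyMock.anyObject(String.class))).andStubReturn(true);

// The void store(): accept any InputStream and any number of calls.
dao.store(EasyMock.eq(Service.PATH), EasyMock.eq(Service.NAME),
          EasyMock.<InputStream>anyObject());
EasyMock.expectLastCall().asStub();

EasyMock.replay(dao);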

My analysis is that EasyMock tests will break more often, but Mockito ones won't break when you would want them to. I prefer my tests to break. At least I'm aware of the impact of my changes. This is, of course, my personal point of view.

java – Javaw.exe application not opening, Windows 10 Pro Surface Pro 4

This is because javaw points to the Java Virtual Machine. If you want to run the application – find the .jar it uses and then double-click that. Or use javaw to run said application:

  1. [WIN] + [R]
  2. Type cmd
  3. Enter: javaw -jar path-to-your-jar-file

javaw by itself will do nothing.

EDIT: You need to use the -jar option.

c++ – What's the logical difference between PostQuitMessage() and DestroyWindow()?

DestroyWindow destroys the window (surprise) and sends WM_DESTROY (you'll also get WM_NCDESTROY) to the window procedure. This is the default behaviour of WM_CLOSE. However, just because a window was destroyed does not mean the message loop should end. This can be the case when you have one specific window that ends the application when closed and others that do nothing to the application when closed (e.g., an options page).

PostQuitMessage posts a WM_QUIT to the message queue, often causing the message loop to end. For example, GetMessage will return 0 when it pulls a WM_QUIT out. This would usually be called in the WM_DESTROY handler for your main window. This is not default behaviour; you have to do it yourself.

Neither snippet is correct. The first one does what the default window procedure already does when it processes the WM_CLOSE message, so it is superfluous. But it doesn't otherwise make the application quit; it keeps running, and you'd normally have to force the debugger to stop with Debug + Stop Debugging. If you run it without a debugger, you'll leave the process running but without a window, so you can't tell it is still running. Use Taskmgr.exe, Processes tab, to see those zombie processes.

The second snippet will terminate the app but will not clean up properly, since you don't pass the WM_CLOSE message to the default window procedure. The window doesn't get destroyed. The operating system will clean up for you, though, so it does all come to a good end, just without any bonus points for elegance.

The proper way to do it is to quit when your main window is destroyed. You'll know about it from the WM_DESTROY notification that's sent when that happens:

case WM_DESTROY:
    PostQuitMessage(0);
    return 0;

c++ – What's the logical difference between PostQuitMessage() and DestroyWindow()?

PostQuitMessage doesn't necessarily mean the end of the application. It simply posts WM_QUIT to the message loop and allows you to exit from the message loop, so in most cases this means the end of the application. However, in a multithreaded application, if each thread you create has its own message loop, PostQuitMessage only ends that thread's message loop.

As a side note, if you ever need more lines of code to execute after the message loop (such as further clean-up), PostQuitMessage is a better way to go, because DestroyWindow destroys the window without going through the message loop, ignoring whatever clean-up code remains after the message loop. Some may call it not-so-good coding practice, but sometimes you can't avoid situations like that.
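
A minimal sketch of the usual arrangement (standard Win32 boilerplate, error handling omitted): WM_CLOSE destroys the window, WM_DESTROY posts the quit message, and anything after the message loop still runs:

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CLOSE:
        DestroyWindow(hwnd);   // triggers WM_DESTROY / WM_NCDESTROY
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);    // makes GetMessage return 0 below
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

// Message loop, e.g. at the end of WinMain:
MSG msg;
while (GetMessage(&msg, nullptr, 0, 0) > 0) {
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}
// Code here still runs after WM_QUIT is received; further clean-up can go here.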

javascript – How to Unit Test a Directive In Angular 2?

Testing compiled directive using TestBed

Let's say you have the following directive:

@Directive({
  selector: '[my-directive]',
})
class MyDirective {
  public directiveProperty = 'hi!';
}

What you have to do is create a component that uses the directive (it can be just for testing purposes):

@Component({
  selector: 'my-test-component',
  template: ''
})
class TestComponent {}

Now you need to create a module that has them declared:

describe('App', () => {

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [
        TestComponent,
        MyDirective
      ]
    });
  });

  // ...

});

You can add the template (that contains the directive) to the component, but it can also be handled dynamically by overwriting the template in the test:

it('should be able to test directive', async(() => {
  TestBed.overrideComponent(TestComponent, {
    set: {
      template: '<div my-directive></div>'
    }
  });

  // ...      

}));

Now you can try to compile the component, and query it using By.directive. At the very end, there is a possibility to get a directive instance using the injector:

TestBed.compileComponents().then(() => {
  const fixture = TestBed.createComponent(TestComponent);
  const directiveEl = fixture.debugElement.query(By.directive(MyDirective));
  expect(directiveEl).not.toBeNull();

  const directiveInstance = directiveEl.injector.get(MyDirective);
  expect(directiveInstance.directiveProperty).toBe('hi!');
}); 

Old answer:

To test a directive you need to create a fake component with it:

@Component({
  selector: 'test-cmp',
  directives: [MyAttrDirective],
  template: ''
})
class TestComponent {}

You can add the template in the component itself but it can be handled dynamically by overwriting the template in test:

it('Should setup with conversation', inject([TestComponentBuilder], (testComponentBuilder: TestComponentBuilder) => {
    return testComponentBuilder
      .overrideTemplate(TestComponent, `<div my-attr-directive></div>`)
      .createAsync(TestComponent)
      .then((fixture: ComponentFixture<TestComponent>) => {
        fixture.detectChanges();
        const directiveEl = fixture.debugElement.query(By.css('[my-attr-directive]'));
        expect(directiveEl.nativeElement).toBeDefined();
      });
  }));

Note that you're able to test what the directive renders, but I couldn't find a way to test a directive the way components are tested (there is no TestComponentBuilder for directives).

It took me a while to find a good example; a good person on the Angular gitter channel pointed me to the Angular Material Design 2 repository for examples. You can find a directive test example here. This is the test file for the tooltip directive of Material Design 2. It looks like you have to test it as part of a component.

javascript – How to Unit Test a Directive In Angular 2?

parsing – building a very simple parser in C

Put the standard headers at file scope, not at block scope:

 #include <stdio.h>

 int main(int argc, char *argv[])
 {
    ...

Suggested changes:

#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *fp_input = NULL;
    FILE *fp_ints = NULL;
    FILE *fp_chars = NULL;
    FILE *fp_floats = NULL;

    char flag;
    int ipint;
    char ipchar;
    float ipfloat;
    int exitStatus;

    if (!(fp_input = fopen("input.txt", "r"))) {
      perror("fp_input failed");
      return 1;
    }

    if (!(fp_ints = fopen("ints.txt", "w"))) {
      ...
    if (fscanf(fp_input, " %c", &flag) != 1) {
      ...

    while (exitStatus != EOF) {
      switch (flag) {
        case 'I':
          fscanf(fp_input, "%i", &ipint);
          fprintf(fp_ints, "%d", ipint);
          break;
        case 'C':
          ...
        default:
          ...
    }

In other words:

1) The #include is in the wrong place

2) I would not use variable names like input.txt with a period in the name.

3) I think you meant the character constant 'I' instead of the variable I

4) You should check for errors whenever/wherever possible (like fopen, fscanf, etc)

5) You need a format string for your fprintf()

parsing – building a very simple parser in C

  • Use switch statements rather than multiple if
while (exitStatus != EOF)
{
    switch (flag) {

        case 'I':
            //...
            break;

        case 'C':
            //...
            break;

        case 'F':
            //...
            break;

        default:
            puts("Flag not recognized");
            return EXIT_FAILURE;
        }
}
  • fprintf is the same as printf; the only difference is that you get to decide the output stream instead of stdout, so a format string is still required
  • Variable names cannot have a . character in them, as it is reserved for accessing members of an object
  • exitStatus needs to be updated at each iteration so that the program will know when to stop reading from the file. I used fgetc and ungetc for that

This code should do what you need:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    FILE *input = fopen("input.txt", "r");
    FILE *ints = fopen("ints.txt", "w+");
    FILE *chars = fopen("chars.txt", "w+");
    FILE *floats = fopen("floats.txt", "w+");

    int ipint, exitStatus;
    char flag, ipchar;
    float ipfloat;

    if (NULL == input) {
        perror("File not found [input.txt]");
        return EXIT_FAILURE;
    }

    while ((exitStatus = fgetc(input)) != EOF && ungetc(exitStatus, input))
    {
        fscanf(input, " %c", &flag);
        switch (flag) {

            case 'I':
                fscanf(input, "%i", &ipint);
                fprintf(ints, "%i", ipint);
                break;

            case 'C':
                fscanf(input, " %c", &ipchar);
                fprintf(chars, "%c", ipchar);

                break;

            case 'F':
                fscanf(input, "%f", &ipfloat);
                fprintf(floats, "%f", ipfloat);
                break;

            default:
                puts("Flag not recognized");
                fclose(input);
                fclose(ints);
                fclose(floats);
                fclose(chars);
                return EXIT_FAILURE;
        }

    }
    fclose(input);
    fclose(ints);
    fclose(floats);
    fclose(chars);

    return EXIT_SUCCESS;
}

python – How to make a 4d plot with matplotlib using arbitrary data

Great question Tengis; all the math folks love to show off flashy surface plots with given functions, while leaving out dealing with real-world data. The sample code you provided uses gradients, since the relationships of the variables are modeled using functions. For this example I will generate random data using a standard normal distribution.

Anyway, here is how you can quickly plot 4D random (arbitrary) data, with the first three variables on the axes and the fourth as color:

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

x = np.random.standard_normal(100)
y = np.random.standard_normal(100)
z = np.random.standard_normal(100)
c = np.random.standard_normal(100)

img = ax.scatter(x, y, z, c=c, cmap=plt.hot())
fig.colorbar(img)
plt.show()

Note: A heatmap with the hot color scheme (yellow to red) was used for the 4th dimension

Result:

[Result: scatter plot with color as the 4th dimension – https://i.stack.imgur.com/1O3aI.png]

I know that the question is very old, but I would like to present this alternative where, instead of using the scatter plot, we have a 3D surface diagram where the colors are based on the 4th dimension. Personally I don't really see the spatial relation in the case of the scatter plot, and using a 3D surface helps me to more easily understand the graphic.

The main idea is the same as in the accepted answer, but we have a 3D graph of the surface that allows us to visually see the distances between the points better. The following code is mainly based on the answer given to this question.

import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import matplotlib.tri as mtri

# The values related to each point. This can be a pandas DataFrame, for
# example, where each column is linked to a variable <-> 1 dimension.
# The idea is that each line = 1 pt in 4D.
do_random_pt_example = True;

index_x = 0; index_y = 1; index_z = 2; index_c = 3;
list_name_variables = ['x', 'y', 'z', 'c'];
name_color_map = 'seismic';

if do_random_pt_example:
    number_of_points = 200;
    x = np.random.rand(number_of_points);
    y = np.random.rand(number_of_points);
    z = np.random.rand(number_of_points);
    c = np.random.rand(number_of_points);
else:
    # Example where we have a Pandas Dataframe where each line = 1 pt in 4D.
    # We assume here that the data frame df has already been loaded before.
    x = df[list_name_variables[index_x]]; 
    y = df[list_name_variables[index_y]]; 
    z = df[list_name_variables[index_z]]; 
    c = df[list_name_variables[index_c]];
#end
#-----

# We create triangles that join 3 pt at a time and where their colors will be
# determined by the values of their 4th dimension. Each triangle contains 3
# indexes corresponding to the line number of the points to be grouped. 
# Therefore, different methods can be used to define the value that 
# will represent the 3 grouped points and I put some examples.
triangles = mtri.Triangulation(x, y).triangles;

choice_calcuation_colors = 1;
if choice_calcuation_colors == 1: # Mean of the c values of the 3 pt of the triangle
    colors = np.mean( [c[triangles[:,0]], c[triangles[:,1]], c[triangles[:,2]]], axis = 0);
elif choice_calcuation_colors == 2: # Median of the c values of the 3 pt of the triangle
    colors = np.median( [c[triangles[:,0]], c[triangles[:,1]], c[triangles[:,2]]], axis = 0);
elif choice_calcuation_colors == 3: # Max of the c values of the 3 pt of the triangle
    colors = np.max( [c[triangles[:,0]], c[triangles[:,1]], c[triangles[:,2]]], axis = 0);
#end
#----------
# Displays the 4D graphic.
fig = plt.figure();
ax = fig.gca(projection='3d');
triang = mtri.Triangulation(x, y, triangles);
surf = ax.plot_trisurf(triang, z, cmap = name_color_map, shade=False, linewidth=0.2);
surf.set_array(colors); surf.autoscale();

#Add a color bar with a title to explain which variable is represented by the color.
cbar = fig.colorbar(surf, shrink=0.5, aspect=5);
cbar.ax.get_yaxis().labelpad = 15; cbar.ax.set_ylabel(list_name_variables[index_c], rotation = 270);

# Add titles to the axes and a title in the figure.
ax.set_xlabel(list_name_variables[index_x]); ax.set_ylabel(list_name_variables[index_y]);
ax.set_zlabel(list_name_variables[index_z]);
plt.title('%s in function of %s, %s and %s' % (list_name_variables[index_c], list_name_variables[index_x], list_name_variables[index_y], list_name_variables[index_z]) );

plt.show();

Example

Another solution, for the case where we absolutely want to have the original values of the 4th dimension for each point, is simply to use the scatter plot combined with a 3D surface diagram that will simply link the points to help you see the distances between them.

name_color_map_surface = 'Greens';  # Colormap for the 3D surface only.

fig = plt.figure(); 
ax = fig.add_subplot(111, projection='3d');
ax.set_xlabel(list_name_variables[index_x]); ax.set_ylabel(list_name_variables[index_y]);
ax.set_zlabel(list_name_variables[index_z]);
plt.title('%s in fcn of %s, %s and %s' % (list_name_variables[index_c], list_name_variables[index_x], list_name_variables[index_y], list_name_variables[index_z]) );

# In this case, we will have 2 color bars: one for the surface and another for 
# the scatter plot.
# For example, we can place the second color bar under or to the left of the figure.
choice_pos_colorbar = 2;

#The scatter plot.
img = ax.scatter(x, y, z, c = c, cmap = name_color_map);
cbar = fig.colorbar(img, shrink=0.5, aspect=5); # Default location is at the right of the figure.
cbar.ax.get_yaxis().labelpad = 15; cbar.ax.set_ylabel(list_name_variables[index_c], rotation = 270);

# The 3D surface that serves only to connect the points to help visualize 
# the distances that separates them.
# The alpha is used to have some transparency in the surface.
surf = ax.plot_trisurf(x, y, z, cmap = name_color_map_surface, linewidth = 0.2, alpha = 0.25);

# The second color bar will be placed at the left of the figure.
if choice_pos_colorbar == 1: 
    #I am trying here to have the two color bars with the same size even if it 
    #is currently set manually.
    cbaxes = fig.add_axes([1-0.78375-0.1, 0.3025, 0.0393823, 0.385]);  # Case without tigh layout.
    #cbaxes = fig.add_axes([1-0.844805-0.1, 0.25942, 0.0492187, 0.481161]); # Case with tigh layout.

    cbar = plt.colorbar(surf, cax = cbaxes, shrink=0.5, aspect=5);
    cbar.ax.get_yaxis().labelpad = 15; cbar.ax.set_ylabel(list_name_variables[index_z], rotation = 90);

# The second color bar will be placed under the figure.
elif choice_pos_colorbar == 2: 
    cbar = fig.colorbar(surf, shrink=0.75, aspect=20, pad = 0.05, orientation = 'horizontal');
    cbar.ax.get_yaxis().labelpad = 15; cbar.ax.set_xlabel(list_name_variables[index_z], rotation = 0);
#end
plt.show();

Sample

Finally, it is also possible to use plot_surface, where we define the color that will be used for each face. In a case like this, where we have one vector of values per dimension, the problem is that we have to interpolate the values to get 2D grids. In the case of the interpolation of the 4th dimension, it will be defined only according to X-Y, and Z will not be taken into account. As a result, the colors represent C(x, y) instead of C(x, y, z). The following code is mainly based on the following responses: plot_surface with a 1D vector for each dimension; plot_surface with a selected color for each surface. Note that the calculation is quite heavy compared to the previous solutions, and the display may take a little time.

import matplotlib
from scipy.interpolate import griddata

# X-Y are transformed into 2D grids. It's like a form of interpolation.
x1 = np.linspace(x.min(), x.max(), len(np.unique(x))); 
y1 = np.linspace(y.min(), y.max(), len(np.unique(y)));
x2, y2 = np.meshgrid(x1, y1);

# Interpolation of Z: old X-Y to the new X-Y grid.
# Note: sometimes values can be < z.min(), so it may be better to clamp
# values that are too low to the true minimum value.
z2 = griddata( (x, y), z, (x2, y2), method='cubic', fill_value = 0);
z2[z2 < z.min()] = z.min();

# Interpolation of C: old X-Y on the new X-Y grid (as we did for Z)
# The only problem is the fact that the interpolation of C does not take
# into account Z and that, consequently, the representation is less 
# valid compared to the previous solutions.
c2 = griddata( (x, y), c, (x2, y2), method='cubic', fill_value = 0);
c2[c2 < c.min()] = c.min(); 

#--------
color_dimension = c2; # It must be in 2D - as for X, Y, Z.
minn, maxx = color_dimension.min(), color_dimension.max();
norm = matplotlib.colors.Normalize(minn, maxx);
m = plt.cm.ScalarMappable(norm=norm, cmap = name_color_map);
m.set_array([]);
fcolors = m.to_rgba(color_dimension);

# At this time, X-Y-Z-C are all 2D and we can use plot_surface.
fig = plt.figure(); ax = fig.gca(projection='3d');
surf = ax.plot_surface(x2, y2, z2, facecolors = fcolors, linewidth=0, rstride=1, cstride=1,
                       antialiased=False);
cbar = fig.colorbar(m, shrink=0.5, aspect=5);
cbar.ax.get_yaxis().labelpad = 15; cbar.ax.set_ylabel(list_name_variables[index_c], rotation = 270);
ax.set_xlabel(list_name_variables[index_x]); ax.set_ylabel(list_name_variables[index_y]);
ax.set_zlabel(list_name_variables[index_z]);
plt.title('%s in fcn of %s, %s and %s' % (list_name_variables[index_c], list_name_variables[index_x], list_name_variables[index_y], list_name_variables[index_z]) );
plt.show();

Example

python – How to make a 4d plot with matplotlib using arbitrary data

I would like to add my two cents. Given a three-dimensional matrix where every entry represents a certain quantity, we can create a pseudo four-dimensional plot using NumPy's unravel_index() function in combination with Matplotlib's scatter() method.

import numpy as np
import matplotlib.pyplot as plt


def plot4d(data):
    fig = plt.figure(figsize=(5, 5))
    ax = fig.add_subplot(projection='3d')
    ax.xaxis.pane.fill = False
    ax.yaxis.pane.fill = False
    ax.zaxis.pane.fill = False
    mask = data > 0.01
    idx = np.arange(int(np.prod(data.shape)))
    x, y, z = np.unravel_index(idx, data.shape)
    ax.scatter(x, y, z, c=data.flatten(), s=10.0 * mask, edgecolor='face', alpha=0.2, marker='o', cmap='magma', linewidth=0)
    plt.tight_layout()
    plt.savefig('test_scatter_4d.png', dpi=250)
    plt.close(fig)


if __name__ == '__main__':
    X = np.arange(-10, 10, 0.5)
    Y = np.arange(-10, 10, 0.5)
    Z = np.arange(-10, 10, 0.5)
    X, Y, Z = np.meshgrid(X, Y, Z, indexing='ij')
    density_matrix = np.sin(np.sqrt(X**2 + Y**2 + Z**2))
    plot4d(density_matrix)


c++ – Understanding Linux virtual memory: valgrind's massif output shows major differences with and without --pages-as-heap

I'll try to write a short summary of what I learned while trying to figure out what's happening.
Note: this answer is possible thanks to @Lawrence – appreciated!


Long story short

This has absolutely nothing to do with Linux/kernel (virtual) memory management, nor with std::string.
It's all about glibc's memory allocator – it just allocates huge areas of memory on the first (and not only the first, of course) dynamic allocation, per thread.


Details

MCVE

#include <thread>
#include <vector>
#include <chrono>

int main() {
    std::vector<std::thread> workers;
    for( unsigned i = 0; i < 192; ++i )
        workers.emplace_back([]{
            const auto x = std::make_unique<int>(rand());
            while (true) std::this_thread::sleep_for(std::chrono::seconds(1));});
    workers.back().join();
}

Please ignore the crappy handling of the threads, I wanted this to be as short as possible.

Commands

Compile: g++ --std=c++14 -fno-inline -g3 -O0 -pthread test.cpp.
Profile: valgrind --tool=massif --pages-as-heap=[no|yes] ./a.out

Memory usage

top shows 7815012 KiB virtual memory.
pmap also shows 7815016 KiB virtual memory.
A similar result is shown by massif with pages-as-heap=yes: 7817088 KiB; see below.
On the other hand, massif with pages-as-heap=no is drastically different – around 133 KiB!

Massif output with pages-as-heap=yes

Memory usage before killing the program:

100.00% (8,004,698,112B) (page allocation syscalls) mmap/mremap/brk, --alloc-fns, etc.
->99.78% (7,986,741,248B) 0x54E0679: mmap (mmap.c:34)
| ->46.11% (3,690,987,520B) 0x545C3CF: new_heap (arena.c:438)
| | ->46.11% (3,690,987,520B) 0x545CC1F: arena_get2.part.3 (arena.c:646)
| |   ->46.11% (3,690,987,520B) 0x5463248: malloc (malloc.c:2911)
| |     ->46.11% (3,690,987,520B) 0x4CB7E76: operator new(unsigned long) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
| |       ->46.11% (3,690,987,520B) 0x4026D0: std::_MakeUniq<int>::__single_object std::make_unique<int, int>(int&&) (unique_ptr.h:765)
| |         ->46.11% (3,690,987,520B) 0x400EC5: main::{lambda()
| |           ->46.11% (3,690,987,520B) 0x40225C: void std::_Bind_simple<main::{lambda()
| |             ->46.11% (3,690,987,520B) 0x402194: std::_Bind_simple<main::{lambda()
| |               ->46.11% (3,690,987,520B) 0x402102: std::thread::_Impl<std::_Bind_simple<main::{lambda()
| |                 ->46.11% (3,690,987,520B) 0x4CE2C7E: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
| |                   ->46.11% (3,690,987,520B) 0x51C96B8: start_thread (pthread_create.c:333)
| |                     ->46.11% (3,690,987,520B) 0x54E63DB: clone (clone.S:109)
| |                       
| ->33.53% (2,684,354,560B) 0x545C35B: new_heap (arena.c:427)
| | ->33.53% (2,684,354,560B) 0x545CC1F: arena_get2.part.3 (arena.c:646)
| |   ->33.53% (2,684,354,560B) 0x5463248: malloc (malloc.c:2911)
| |     ->33.53% (2,684,354,560B) 0x4CB7E76: operator new(unsigned long) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
| |       ->33.53% (2,684,354,560B) 0x4026D0: std::_MakeUniq<int>::__single_object std::make_unique<int, int>(int&&) (unique_ptr.h:765)
| |         ->33.53% (2,684,354,560B) 0x400EC5: main::{lambda()
| |           ->33.53% (2,684,354,560B) 0x40225C: void std::_Bind_simple<main::{lambda()
| |             ->33.53% (2,684,354,560B) 0x402194: std::_Bind_simple<main::{lambda()
| |               ->33.53% (2,684,354,560B) 0x402102: std::thread::_Impl<std::_Bind_simple<main::{lambda()
| |                 ->33.53% (2,684,354,560B) 0x4CE2C7E: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
| |                   ->33.53% (2,684,354,560B) 0x51C96B8: start_thread (pthread_create.c:333)
| |                     ->33.53% (2,684,354,560B) 0x54E63DB: clone (clone.S:109)
| |                       
| ->20.13% (1,611,399,168B) 0x51CA1D4: pthread_create@@GLIBC_2.2.5 (allocatestack.c:513)
|   ->20.13% (1,611,399,168B) 0x4CE2DC1: std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>, void (*)()) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
|     ->20.13% (1,611,399,168B) 0x4CE2ECB: std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
|       ->20.13% (1,611,399,168B) 0x40139A: std::thread::thread<main::{lambda()
|         ->20.13% (1,611,399,168B) 0x4012AE: _ZN9__gnu_cxx13new_allocatorISt6threadE9constructIS1_IZ4mainEUlvE_EEEvPT_DpOT0_ (new_allocator.h:120)
|           ->20.13% (1,611,399,168B) 0x401075: _ZNSt16allocator_traitsISaISt6threadEE9constructIS0_IZ4mainEUlvE_EEEvRS1_PT_DpOT0_ (alloc_traits.h:527)
|             ->19.19% (1,535,864,832B) 0x401009: void std::vector<std::thread, std::allocator<std::thread> >::emplace_back<main::{lambda()
|             | ->19.19% (1,535,864,832B) 0x400F47: main (test.cpp:10)
|             |   
|             ->00.94% (75,534,336B) in 1+ places, all below ms_prints threshold (01.00%)
|             
->00.22% (17,956,864B) in 1+ places, all below ms_prints threshold (01.00%)

Massif output with pages-as-heap=no

Memory usage before killing the program:

--------------------------------------------------------------------------------
  n        time(i)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 68      2,793,125          143,280          136,676         6,604            0
95.39% (136,676B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->50.74% (72,704B) 0x4EBAEFE: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
| ->50.74% (72,704B) 0x40106B8: call_init.part.0 (dl-init.c:72)
|   ->50.74% (72,704B) 0x40107C9: _dl_init (dl-init.c:30)
|     ->50.74% (72,704B) 0x4000C68: ??? (in /lib/x86_64-linux-gnu/ld-2.23.so)
|       
->36.58% (52,416B) 0x40138A3: _dl_allocate_tls (dl-tls.c:322)
| ->36.58% (52,416B) 0x53D126D: pthread_create@@GLIBC_2.2.5 (allocatestack.c:588)
|   ->36.58% (52,416B) 0x4EE9DC1: std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>, void (*)()) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
|     ->36.58% (52,416B) 0x4EE9ECB: std::thread::_M_start_thread(std::shared_ptr<std::thread::_Impl_base>) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
|       ->36.58% (52,416B) 0x40139A: std::thread::thread<main::{lambda()
|         ->36.58% (52,416B) 0x4012AE: _ZN9__gnu_cxx13new_allocatorISt6threadE9constructIS1_IZ4mainEUlvE_EEEvPT_DpOT0_ (new_allocator.h:120)
|           ->36.58% (52,416B) 0x401075: _ZNSt16allocator_traitsISaISt6threadEE9constructIS0_IZ4mainEUlvE_EEEvRS1_PT_DpOT0_ (alloc_traits.h:527)
|             ->34.77% (49,824B) 0x401009: void std::vector<std::thread, std::allocator<std::thread> >::emplace_back<main::{lambda()
|             | ->34.77% (49,824B) 0x400F47: main (test.cpp:10)
|             |   
|             ->01.81% (2,592B) 0x4010FF: void std::vector<std::thread, std::allocator<std::thread> >::_M_emplace_back_aux<main::{lambda()
|               ->01.81% (2,592B) 0x40103D: void std::vector<std::thread, std::allocator<std::thread> >::emplace_back<main::{lambda()
|                 ->01.81% (2,592B) 0x400F47: main (test.cpp:10)
|                   
->06.13% (8,784B) 0x401B4B: __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<std::thread::_Impl<std::_Bind_simple<main::{lambda()
| ->06.13% (8,784B) 0x401A60: std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<std::thread::_Impl<std::_Bind_simple<main::{lambda()
|   ->06.13% (8,784B) 0x40194D: std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<std::thread::_Impl<std::_Bind_simple<main::{lambda()
|     ->06.13% (8,784B) 0x401894: std::__shared_ptr<std::thread::_Impl<std::_Bind_simple<main::{lambda()
|       ->06.13% (8,784B) 0x40183A: std::shared_ptr<std::thread::_Impl<std::_Bind_simple<main::{lambda()
|         ->06.13% (8,784B) 0x4017C7: std::shared_ptr<std::thread::_Impl<std::_Bind_simple<main::{lambda()
|           ->06.13% (8,784B) 0x4016AB: std::shared_ptr<std::thread::_Impl<std::_Bind_simple<main::{lambda()
|             ->06.13% (8,784B) 0x40155E: std::shared_ptr<std::thread::_Impl<std::_Bind_simple<main::{lambda()
|               ->06.13% (8,784B) 0x401374: std::thread::thread<main::{lambda()
|                 ->06.13% (8,784B) 0x4012AE: _ZN9__gnu_cxx13new_allocatorISt6threadE9constructIS1_IZ4mainEUlvE_EEEvPT_DpOT0_ (new_allocator.h:120)
|                   ->06.13% (8,784B) 0x401075: _ZNSt16allocator_traitsISaISt6threadEE9constructIS0_IZ4mainEUlvE_EEEvRS1_PT_DpOT0_ (alloc_traits.h:527)
|                     ->05.83% (8,352B) 0x401009: void std::vector<std::thread, std::allocator<std::thread> >::emplace_back<main::{lambda()
|                     | ->05.83% (8,352B) 0x400F47: main (test.cpp:10)
|                     |   
|                     ->00.30% (432B) in 1+ places, all below ms_prints threshold (01.00%)
|                     
->01.43% (2,048B) 0x403432: __gnu_cxx::new_allocator<std::thread>::allocate(unsigned long, void const*) (new_allocator.h:104)
| ->01.43% (2,048B) 0x4032CF: std::allocator_traits<std::allocator<std::thread> >::allocate(std::allocator<std::thread>&, unsigned long) (alloc_traits.h:488)
|   ->01.43% (2,048B) 0x4030B8: std::_Vector_base<std::thread, std::allocator<std::thread> >::_M_allocate(unsigned long) (stl_vector.h:170)
|     ->01.43% (2,048B) 0x4010B6: void std::vector<std::thread, std::allocator<std::thread> >::_M_emplace_back_aux<main::{lambda()
|       ->01.43% (2,048B) 0x40103D: void std::vector<std::thread, std::allocator<std::thread> >::emplace_back<main::{lambda()
|         ->01.43% (2,048B) 0x400F47: main (test.cpp:10)
|           
->00.51% (724B) in 1+ places, all below ms_prints threshold (01.00%)

What the freak happens?

pages-as-heap=no

With pages-as-heap=no things look reasonable – let's not inspect it further. As expected, everything ends up in malloc/new/new[] and the memory usage is small enough not to worry us – these are the high-level allocations.

pages-as-heap=yes

But look at pages-as-heap=yes: ~8 GiB of virtual memory with this simple code?

Let's inspect the stack traces.

pthread_create

Let's start with the easier one: the one that ends with pthread_create.

massif reports 1,611,399,168 bytes of allocated memory – this is exactly 192 * 8196 KiB, meaning 192 threads * 8 MiB, which is the default maximum stack size of a thread in Linux.

Note that 8196 KiB is not exactly 8 MiB (8192 KiB). I don't know where this difference comes from, but it's not significant at the moment.

std::make_unique<int>

OK, let's now see the other two stacks… wait, they are exactly the same? Yeah, massif's documentation explains this; I didn't completely understand it, but it's also not significant. They show exactly the same stack. Let's combine the results and examine them together.

The memory usage from these two stacks combined is 6,375,342,080 bytes, and all of it is caused by our simple std::make_unique<int>!

Let's take a step back: if we run the same experiment, but with a single thread, we will see that this int allocation causes 67,108,864 bytes of memory to be allocated, which is exactly 64 MiB. What is happening?

It all comes down to the implementation of malloc (as we all know, new/new[] is internally implemented with malloc, by default).

malloc internally uses a memory allocator called ptmalloc2 – the default memory allocator in glibc/Linux, which supports threads.

Simply put, this allocator deals with the following terms:

  • per thread arena: a huge area of memory; usually per thread, for performance reasons; not all software threads have their own per-thread-arenas, this usually depends on the number of hardware threads (and other details, I guess);
  • heap: the arenas are divided into heaps;
  • chunks: the heaps are divided into smaller areas of memory, called chunks.

There are a lot of details about these things; I will post some interesting links a bit later, although this should be enough for the reader to do their own research – these are really low-level and deep things, related to C/C++ memory management.

So, let's go back to our test with a single thread – 64 MiB allocated for a single int?? Let's look at the stack trace again and concentrate on its end:

mmap (mmap.c:34)
new_heap (arena.c:438)
arena_get2.part.3 (arena.c:646)
malloc (malloc.c:2911)

Surprise, surprise: malloc calls arena_get2, which calls new_heap, which leads us to mmap (mmap and brk are the low-level system calls used for memory allocation in Linux). And this is reported to allocate exactly 64 MiB of memory.

OK, let's now go back to our original example with the 192 threads and our huge number 6,375,342,080 – this is exactly 95 * 64 MiB!

Why exactly 95, I can't really say – I stopped digging – but the fact that the big number is divisible by 64 MiB was good enough for me.

You can dig a lot deeper, if necessary.
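
One way to dig a little further, if you want to see the arenas' contribution disappear: a minimal sketch (glibc-specific; M_ARENA_MAX via mallopt() and the MALLOC_ARENA_MAX environment variable are glibc extensions, and exact behaviour varies between versions) that limits the allocator to a single arena before the threads are created:

#include <malloc.h>   // mallopt, M_ARENA_MAX (glibc extension)
#include <cstdlib>    // rand
#include <thread>
#include <vector>
#include <memory>
#include <chrono>

int main() {
    mallopt(M_ARENA_MAX, 1);   // ask glibc to use a single malloc arena
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < 192; ++i)
        workers.emplace_back([]{
            const auto x = std::make_unique<int>(rand());
            while (true) std::this_thread::sleep_for(std::chrono::seconds(1));});
    workers.back().join();
}

Comparing top/pmap output with and without the mallopt() call (or, equivalently, running the original binary with MALLOC_ARENA_MAX=1), the 64 MiB per-arena mappings described above should largely disappear, while the 8 MiB thread stacks remain.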

Useful links

Really cool explanatory article: Understanding glibc malloc, by sploitfun

A more formal/official documentation: The GNU allocator

A cool Stack Exchange question: How does glibc malloc work

Others:

If some of these links are broken at the moment of reading this post, it should be fairly easy to find similar articles. This topic is very popular, if you know what to look for and how.

Thanks

I hope these observations give a good high-level description of the whole picture and also give enough food for further extended research.

Feel free to comment / (suggest) edit / correct / extend / etc.

massif with --pages-as-heap=yes and the top column you are observing both measure the virtual memory used by a process. This virtual memory includes all the space mmap'd in the implementation of malloc and during the creation of threads. For example, the default stack size for a thread will be 8192 KiB, which is reflected in the creation of each thread and contributes to the virtual memory footprint.

The specific allocation scheme will be dependent on implementation but it seems that the first heap allocation on a new thread will mmap roughly 65 megabytes of space. This can be viewed by looking at the pmap output for a process.

Excerpt from a very similar program to the example:

75170:   ./a.out
0000000000400000     24K r-x-- a.out
0000000000605000      4K r---- a.out
0000000000606000      4K rw--- a.out
0000000001b6a000    200K rw---   [ anon ]
00007f669dfa4000      4K -----   [ anon ]
00007f669dfa5000   8192K rw---   [ anon ]
00007f669e7a5000      4K -----   [ anon ]
00007f669e7a6000   8192K rw---   [ anon ]
00007f669efa6000      4K -----   [ anon ]
00007f669efa7000   8192K rw---   [ anon ]
...
00007f66cb800000   8192K rw---   [ anon ]
00007f66cc000000    132K rw---   [ anon ]
00007f66cc021000  65404K -----   [ anon ]
00007f66d0000000    132K rw---   [ anon ]
00007f66d0021000  65404K -----   [ anon ]
00007f66d4000000    132K rw---   [ anon ]
00007f66d4021000  65404K -----   [ anon ]
...
00007f6880586000   8192K rw---   [ anon ]
00007f6880d86000   1056K r-x-- libm-2.23.so
00007f6880e8e000   2044K ----- libm-2.23.so
...
00007f6881c08000      4K r---- libpthread-2.23.so
00007f6881c09000      4K rw--- libpthread-2.23.so
00007f6881c0a000     16K rw---   [ anon ]
00007f6881c0e000    152K r-x-- ld-2.23.so
00007f6881e09000     24K rw---   [ anon ]
00007f6881e33000      4K r---- ld-2.23.so
00007f6881e34000      4K rw--- ld-2.23.so
00007f6881e35000      4K rw---   [ anon ]
00007ffe9d75b000    132K rw---   [ stack ]
00007ffe9d7f8000     12K r----   [ anon ]
00007ffe9d7fb000      8K r-x--   [ anon ]
ffffffffff600000      4K r-x--   [ anon ]
 total          7815008K

It seems that malloc becomes more conservative as you approach some threshold of virtual memory per process. Also, my comment about libraries being mapped separately was misguided (they should be shared per process).

c++ – Understanding Linux virtual memory: valgrind's massif output shows major differences with and without --pages-as-heap

This is only a kind of answer (from the Valgrind perspective). The problem of memory pools, in particular with C++ strings, has been known for some time. The Valgrind manual has a section on leaks in C++ strings, suggesting you try to set the GLIBCXX_FORCE_NEW environment variable.
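
For example, one way to combine that suggestion with the massif runs above (a sketch; adjust the binary and options to your case):

GLIBCXX_FORCE_NEW=1 valgrind --tool=massif --pages-as-heap=no ./a.out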

Additionally, for GCC 6 and later, Valgrind has added hooks to clean up still-reachable memory in libstdc++. The Valgrind bugzilla entry is here and the GCC one is here.

I don't see why such small allocations blow up to so many gigabytes (over 12 GB for a 64-bit executable, CentOS 6.6, GCC 6.2).