{"id":381,"date":"2021-07-26T00:00:00","date_gmt":"2021-07-26T00:00:00","guid":{"rendered":"https:\/\/tac.debuzzify.com\/?p=381"},"modified":"2023-06-27T05:31:24","modified_gmt":"2023-06-27T05:31:24","slug":"how-to-detect-memory-leakage-in-your-python-application","status":"publish","type":"post","link":"https:\/\/www.the-analytics.club\/how-to-detect-memory-leakage-in-your-python-application\/","title":{"rendered":"How to Detect Memory Leakage in Your Python Application"},"content":{"rendered":"\n

Standard Python libraries that can report the memory usage and execution time of every line<\/i><\/b><\/p>\n\n\n\n


\n\n\n\n\n\n

It\u2019s interesting to see how measuring algorithm performance in Python has improved. About a decade ago, when I started coding in Python, I stored timestamps in variables at different points in my code. It is the ugliest way, for sure, but at the time, I thought I was smart.<\/p>\n\n\n\n

A couple of years later, when I learned to use decorators in Python, I created a function to do the same. I thought I got smarter.<\/p>\n\n\n\n

But the Python ecosystem has grown huge in the last decade. Its applications spread beyond data science and web app development. Along with this evolution, we improved the ways to do performance audits in Python.<\/p>\n\n\n\n

The need for a more accurate measure of resource usage is high in the era of cloud computing. If you\u2019re using AWS, Azure, G-Cloud, or any other cloud infrastructure, often you\u2019ll have to pay for resource hours.<\/p>\n\n\n\n

Also, Python is the prevalent language for data-intensive applications such as machine learning and distributed computing. Thus, understanding profiling and performance auditing is essential for every Python programmer.<\/p>\n\n\n\n

Before moving on, let\u2019s also discuss the old-school methods I\u2019ve been using for years.<\/p>\n\n\n\n

\n
\n
\n

Grab your aromatic coffee <\/a>(or tea<\/a>) and get ready…!<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n

The old-school methods I\u2019ll never use again.<\/h2>\n\n\n\n

This was my approach when I first started programming. I stored the time values before and after the execution of a function. The difference tells how long the process ran.<\/p>\n\n\n\n

The code snippet below counts the prime numbers less than the input value. At the beginning and the end of the function, I\u2019ve written code to capture the time and calculate the duration. If another function needs a performance audit, I\u2019ll have to write the same lines again.<\/p>\n\n\n\n

from time import time

def count_primes(max_num):
    """Count the prime numbers below the input value.
    Input values are in thousands, i.e. 40 means 40,000.
    """
    t1 = time()
    count = 0
    for num in range(max_num * 1000 + 1):
        if num > 1:
            for i in range(2, num):
                if num % i == 0:
                    break
            else:
                # The loop found no divisor, so num is prime.
                count += 1
    t2 = time()
    print(f"Counting prime numbers took {t2 - t1} seconds")
    return count


print(count_primes(20))

I used this method for several years. The biggest problem was my codebase filling up with lines that snapshot the time. Even on a small-scale project, these repetitive lines are annoying. They reduce the code\u2019s readability and make debugging a nightmare.<\/p>\n\n\n\n

I was excited when I learned about decorators. They could make my Python code pretty again. I only have to put a decorator on top of each function.<\/p>\n\n\n\n

A decorator takes a function, adds some functionality, and returns the modified one. Here is mine that calculates and prints the execution times.<\/p>\n\n\n\n

from time import time

def taimr(func):
    def inner(*args, **kwargs):
        t1 = time()
        res = func(*args, **kwargs)
        t2 = time()

        print(f"Your function execution took {t2 - t1} seconds")
        return res

    return inner


@taimr
def count_primes(max_num):
    count = 0
    for num in range(max_num * 1000 + 1):
        if num > 1:
            for i in range(2, num):
                if num % i == 0:
                    break
            else:
                count += 1
    return count


@taimr
def skwer(n):
    return n ** 2


print(count_primes(20))
print(skwer(20))

In the above code, I created a decorator that captures the time before and after executing a function and prints the duration. I can annotate any function with it, and it\u2019ll print the duration on every execution.<\/p>\n\n\n\n

As you can see, I wrote a second function, skwer. Yet, this time I didn\u2019t repeat any time-capturing code. Instead, I annotated skwer too.<\/p>\n\n\n\n

Decorators are great time savers. With them, the code now looks tidier. But there\u2019s a caveat with this method to capture execution times.<\/p>\n\n\n\n

@taimr
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


print(fib(10))

If your script contains a recursive function, one that calls itself, this will be a mess: the decorator fires on every recursive call and floods the output with durations. A workaround I\u2019ve been using for some time is to attach the decorator to a wrapper function instead.<\/p>\n\n\n\n
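A minimal sketch of that workaround, assuming the same taimr decorator from above: leave the recursive function undecorated and time a thin wrapper around it, so the timer fires once per top-level call instead of once per recursive call.

```python
from time import time

def taimr(func):
    # Same decorator as above: prints the wall-clock duration of one call.
    def inner(*args, **kwargs):
        t1 = time()
        res = func(*args, **kwargs)
        t2 = time()
        print(f"Your function execution took {t2 - t1} seconds")
        return res
    return inner

def fib(n):
    # Undecorated recursive function; recursive calls stay silent.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

@taimr
def fib_wrapper(n):
    # Decorated wrapper: the timer fires once for the whole computation.
    return fib(n)

print(fib_wrapper(10))
```

The wrapper adds one extra function call, but the timing output stays readable.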

Python has some standard libraries to solve these problems conveniently. Two of them that track running duration are \u2018timeit\u2019 and \u2018cProfile.\u2019<\/p>\n\n\n\n


The quickest way to measure execution times.<\/h2>\n\n\n\n

The standard Python installation includes timeit \u2014 a convenient way to measure execution time.<\/p>\n\n\n\n

import timeit

def fib(n=20):
    return n if n < 2 else fib(n - 1) + fib(n - 2)


print(timeit.timeit(fib, number=10))

With timeit, you don\u2019t have to write extra lines to capture the time and do the calculations manually. Also, timeit measures the execution of a statement as a whole, so you don\u2019t have to worry about recursive function calls.<\/p>\n\n\n\n
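If a single timing run looks noisy, timeit.repeat (also in the standard library) runs the measurement loop several times; taking the minimum is a common way to discount background load. A small sketch:

```python
import timeit

def fib(n=20):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Run the 10-call loop five times; min() is the least-disturbed estimate,
# since other processes can only ever add to the measured time.
times = timeit.repeat(fib, number=10, repeat=5)
print(f"best of 5 runs: {min(times):.4f} seconds")
```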

Also, the IPython notebook has a great magic function that prints the running duration of cells. This feature has been super helpful when working in Jupyter Notebooks.<\/p>\n\n\n

\n
\"The
The quickest way to measure execution times.<\/figcaption><\/figure><\/div>\n\n\n

A comprehensive collection of performance statistics.<\/h2>\n\n\n\n

Timeit is a convenient way to collect performance statistics. Yet, it doesn\u2019t go deep enough to find which parts of your program are the slowest.<\/p>\n\n\n\n

Another standard Python library, cProfile, can do better.<\/p>\n\n\n\n

import cProfile

...

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

cProfile.run("fib(20)")

Running the above script will give you an illustrative summary of each line.<\/p>\n\n\n

\n
\"A
A comprehensive collection of performance statistics.<\/figcaption><\/figure><\/div>\n\n\n

The Python interpreter made 21,894 function calls in six milliseconds to execute the four lines in our script. It spent most of the time running line number three, where we defined our Fibonacci function.<\/p>\n\n\n\n

It\u2019s remarkable. In a large-scale application, cProfile would show us exactly where the bottlenecks are.<\/p>\n\n\n\n

Executing my application function inside another function, and as a string literal at that, is a discomfort. But cProfile has a more convenient alternative. Which one to use is a personal preference.<\/p>\n\n\n\n

import cProfile

...

with cProfile.Profile() as pr:
    # Your normal script
    print(fib(20))
    print(fib(25))
    print(fib(30))

pr.print_stats()

When auditing with cProfile, I usually prefer the Profile class over the run method. Yes, the run method is very convenient. Yet, I love the Profile class because it doesn\u2019t expect me to run the function inside another. I have the flexibility to do what I need.<\/p>\n\n\n\n
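One thing the plain print_stats() output doesn\u2019t do is rank the rows. The standard pstats module can sort the collected profile, for example by cumulative time, so the heaviest call chains appear first. A sketch, reusing the same fib:

```python
import cProfile
import pstats

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with cProfile.Profile() as pr:
    fib(20)

# Wrap the profiler in a Stats object, sort by cumulative time,
# and print only the top five rows.
stats = pstats.Stats(pr)
stats.sort_stats(pstats.SortKey.CUMULATIVE)
stats.print_stats(5)
```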

The memory leakage detective.<\/h2>\n\n\n\n

Both timeit and cProfile simplify a crucial problem Python programmers have. Pinpointing where the code spends most of its running time hints at further optimization opportunities.<\/p>\n\n\n\n

Yet, running time is hardly the correct measure of an algorithm\u2019s performance. Many other external factors distort the actual execution time. Often the OS controls it rather than the code itself.<\/p>\n\n\n\n

\n

Running time isn\u2019t a measure of performance. It\u2019s only a proxy for resource usage.<\/p>\n<\/blockquote>\n\n\n\n

Because of these external complexities, we cannot conclude that a long-running function is indeed a bottleneck.<\/p>\n\n\n\n

Python standard libraries also have a way to estimate resource usage with precision \u2014 Tracemalloc.<\/p>\n\n\n\n

Tracemalloc, which stands for Trace Memory Allocation, is a standard Python library. It allows you to take snapshots of memory usage at different points in your code. Later you can compare one with another.<\/p>\n\n\n\n

Here\u2019s a basic example of tracemalloc.<\/p>\n\n\n\n

import tracemalloc

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

tracemalloc.start()
for i in range(25, 35):
    print(f"{i}th fibonacci number is, {fib(i)}")

snapshot = tracemalloc.take_snapshot()
top_stats = snapshot.statistics("lineno")

print("---------------------------------------------------------")
for stat in top_stats:
    print(stat)

Running the above will output the memory usage of each line \u2014 like cProfile, but with memory instead of running time.<\/p>\n\n\n

\n
\"The
The memory leakage detective.<\/figcaption><\/figure><\/div>\n\n\n

The fourth line of the code was the most significant memory consumer. The interpreter went through this line 28 times, and it used 424B of memory every time.<\/p>\n\n\n\n

This amount is small in the example application. But in real-life applications, this will be significant and critical.<\/p>\n\n\n\n

Further, tracemalloc allows comparison between snapshots. With this feature, we can even create a map of memory usage by different components.<\/p>\n\n\n\n

tracemalloc.start()

snap1 = tracemalloc.take_snapshot()
fib(40)
snap2 = tracemalloc.take_snapshot()

top_stats = snap2.compare_to(snap1, "lineno")

for stat in top_stats:
    print(stat)

The above code will print how much memory each line consumed and how much the increment was from the last snapshot.<\/p>\n\n\n

\n
\"The
The memory leakage detective.<\/figcaption><\/figure><\/div>\n\n\n

In our code, we calculated the 30th Fibonacci number in line 9 and took our first snapshot. Then we ran the calculation for the 40th Fibonacci number and took another. The output says we\u2019ve used 4664B of additional memory and executed line number 5 eleven more times.<\/p>\n\n\n\n
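Besides snapshots, tracemalloc also keeps a running counter you can query at any point: tracemalloc.get_traced_memory() returns the current and peak traced sizes in bytes. A sketch, with build_squares being a hypothetical allocation-heavy helper:

```python
import tracemalloc

def build_squares(n):
    # Hypothetical helper that allocates a sizeable list.
    return [i * i for i in range(n)]

tracemalloc.start()
data = build_squares(100_000)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1024:.1f} KiB, peak: {peak / 1024:.1f} KiB")
```

The peak value is handy for sizing containers or cloud instances, since it captures short-lived spikes that a pair of snapshots can miss.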

Conclusion<\/h2>\n\n\n\n

A critical aspect of successfully running software is accurately measuring how many resources it uses. This understanding allows engineers to allocate the right amount of CPU and memory to run the application.<\/p>\n\n\n\n

Today, we use Python extensively in many projects. Because of its widespread community and ecosystem, its usage has multiplied in the recent past.<\/p>\n\n\n\n

This article focused on how to trace execution times and memory usage in a Python program. Python\u2019s standard libraries let us find these metrics at the line level, even in a multi-module application.<\/p>\n\n\n\n

We discussed three built-in Python libraries for performance audits. Timeit is the most convenient and blends well with Jupyter notebooks. cProfile is a comprehensive execution time recorder. Finally, tracemalloc allows us to take memory snapshots at different points and compare them.<\/p>\n\n\n\n

I hope measuring performance in Python is now crystal clear. But how would you make Python run faster? It\u2019s still considered a slow programming language compared to Java and C++. Check out my previous article on boosting the performance of Python scripts.<\/p>\n\n\n\n


\n\n\n\n
\n

Thanks for the read, friend. It seems you and I have lots of common interests. Say Hi to me on LinkedIn<\/strong><\/a>, Twitter<\/strong><\/a>, and Medium<\/strong><\/a>. <\/p>\n\n\n\n

Not a Medium member yet? Please use this link to become a member<\/strong><\/a> because I earn a commission for referring at no extra cost for you.<\/p>\n<\/blockquote>\n","protected":false},"excerpt":{"rendered":"

It\u2019s interesting to see how we improved measuring algorithm performance in python. About a decade ago, when I started coding in python, I stored time into variables at different points in my code.<\/p>\n","protected":false},"author":2,"featured_media":138,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_kad_blocks_custom_css":"","_kad_blocks_head_custom_js":"","_kad_blocks_body_custom_js":"","_kad_blocks_footer_custom_js":"","_kad_post_transparent":"","_kad_post_title":"","_kad_post_layout":"","_kad_post_sidebar_id":"","_kad_post_content_style":"","_kad_post_vertical_padding":"","_kad_post_feature":"","_kad_post_feature_position":"","_kad_post_header":false,"_kad_post_footer":false,"footnotes":""},"categories":[3,5],"tags":[30,27],"taxonomy_info":{"category":[{"value":3,"label":"Python"},{"value":5,"label":"Programming"}],"post_tag":[{"value":30,"label":"optimization"},{"value":27,"label":"python"}]},"featured_image_src_large":["https:\/\/www.the-analytics.club\/wp-content\/uploads\/2023\/06\/how-to-detect-memory-leakage-in-your-python-application.jpg",900,600,false],"author_info":{"display_name":"Thuwarakesh","author_link":"https:\/\/www.the-analytics.club\/author\/thuwarakesh\/"},"comment_info":0,"category_info":[{"term_id":3,"name":"Python","slug":"python","term_group":0,"term_taxonomy_id":3,"taxonomy":"category","description":"","parent":5,"count":52,"filter":"raw","cat_ID":3,"category_count":52,"category_description":"","cat_name":"Python","category_nicename":"python","category_parent":5},{"term_id":5,"name":"Programming","slug":"programming","term_group":0,"term_taxonomy_id":5,"taxonomy":"category","description":"","parent":0,"count":43,"filter":"raw","cat_ID":5,"category_count":43,"category_description":"","cat_name":"Programming","category_nicename":"programming","category_parent":0}],"tag_info":[{"term_id":30,"name":"optimization","slug":"optimization","term_group":0,"term_taxonomy_id":30,"taxono
my":"post_tag","description":"","parent":0,"count":2,"filter":"raw"},{"term_id":27,"name":"python","slug":"python","term_group":0,"term_taxonomy_id":27,"taxonomy":"post_tag","description":"","parent":0,"count":9,"filter":"raw"}],"_links":{"self":[{"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/posts\/381"}],"collection":[{"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/comments?post=381"}],"version-history":[{"count":5,"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/posts\/381\/revisions"}],"predecessor-version":[{"id":1273,"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/posts\/381\/revisions\/1273"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/media\/138"}],"wp:attachment":[{"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/media?parent=381"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/categories?post=381"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.the-analytics.club\/wp-json\/wp\/v2\/tags?post=381"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}