Product Tips & Techniques

Tips and tricks on how to get the most out of Maple and MapleSim

There is no released Classic interface to accompany the 64-bit version of Maple (12, 13) for the 64-bit Windows XP64 operating system. Personally, I prefer running the Standard interface over the Classic one, although I sometimes miss common subexpression display for lengthy symbolic output.
 
The Maple Classic interface appears to talk to the Maple kernel only over a socket (or similar), and the...

In this post I'll take a closer look at the ways in which Maple code can be thread unsafe. If you have not already seen my post on Thread Safety, consider reading that post first. As a brief review, a procedure is thread safe if it works correctly when run in parallel.

The most obvious way in which procedures can be thread unsafe is if they share data without synchronizing access (using a Mutex, for example). So how can two threads share data?
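The simplest case is a global variable. Here is a minimal sketch (the procedure and variable names are mine, not from any Maple library) of two threads racing on an unsynchronized global:

count := 0:
bump := proc()
    global count;
    local i;
    for i to 100000 do
        count := count + 1;   # unsynchronized read-modify-write
    end do;
end proc:
Threads:-Wait( Threads:-Create( bump() ), Threads:-Create( bump() ) );
count;   # often less than 200000: concurrent updates get lost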

I am going to wander away from parallel programming in Maple to talk about GPU programming. However, I will show an example of connecting Maple to a CUDA-accelerated external library, so that's close enough. This post is not intended to be a tutorial on CUDA or OpenCL programming, but an introduction to how the technology works.
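As a taste of what the Maple side of such a connection looks like, here is a hedged sketch using define_external. The library name libcuadd.so and the exported function cu_add are hypothetical stand-ins, not the library from this post:

# assume a CUDA-accelerated C function compiled into libcuadd.so:
#     void cu_add( double *a, double *b, double *c, int n )
cu_add := define_external( 'cu_add',
    'a'::ARRAY( datatype = float[8] ),
    'b'::ARRAY( datatype = float[8] ),
    'c'::ARRAY( datatype = float[8] ),
    'n'::integer[4],
    'LIB' = "libcuadd.so" ):
# hardware Vectors (datatype=float[8]) pass through without copying
av := Vector( 4, [1., 2., 3., 4.], datatype = float[8] ):
bv := Vector( 4, [5., 6., 7., 8.], datatype = float[8] ):
cv := Vector( 4, datatype = float[8] ):
cu_add( av, bv, cv, 4 ):
cv;   # av + bv, computed by the external routine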

Download 10597_asymp.mws

Hi,

I am facing a problem with Maple: I am trying to get an asymptotic series for a transcendental equation, but I keep failing. Could you please help me?

I am waiting for your kind response.

Thanks
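The equation itself is only in the attached worksheet, so as a general pointer: Maple's asympt command computes asymptotic expansions as the variable tends to infinity. The expression below is just a stand-in for the poster's equation:

asympt( x/(x - 1), x, 4 );   # 1 + 1/x + 1/x^2 + 1/x^3 + O(1/x^4)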

 

I'd like to start by thanking all those readers who left feedback on my last post. It was good to hear that most of you enjoy reading my posts and that they are generally helpful. I would like to encourage you to continue posting feedback, especially questions or comments about anything that I fail to explain sufficiently.

The following is a discussion of the limitations of parallel programming in Maple. These are the issues that we are aware of and are hoping to fix in future releases.

I have two sets, A = {a,b,c} and B = {w,x,y,z}. I have to make sets like the following:

{(w,{a,c}), (y,{a,b,c}), (z,{b})}

How can I find the total number of such sets formed from A and B?
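One hedged reading of the question: each element of B is either omitted or paired with a nonempty subset of A, in which case there are (2^|A|)^|B| = 8^4 = 4096 such sets. A quick check in Maple:

with(combinat):
A := {a, b, c}:  B := {w, x, y, z}:
nonempty := nops( powerset( A ) ) - 1;   # 2^3 - 1 = 7 nonempty subsets
( nonempty + 1 )^nops( B );              # each b: one subset or absent -> 4096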

 

I noticed that Maple's command Transpose can mean two different things:

  • ListTools: the Transpose function transposes a list of lists.
  • LinearAlgebra: the Transpose function computes the transpose of a Matrix, Vector, or scalar.

To highlight this I have selected two examples:
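The post's own examples did not survive in this digest, so here is a minimal stand-in pair showing the difference:

ListTools:-Transpose( [[1, 2], [3, 4], [5, 6]] );      # [[1, 3, 5], [2, 4, 6]]
LinearAlgebra:-Transpose( Matrix([[1, 2], [3, 4]]) );  # Matrix([[1, 3], [2, 4]])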

I don't really have anything prepared for today, so I'd like to ask you a few questions about the posts I've made so far. My goal for this blog was to give typical Maple programmers the information they need to start trying parallel programming.

  1. My posts progressed fairly quickly, building up to the Task Programming Model. Did I move too quickly? Were there topics that I did not explain well enough or that you felt needed more explanation?
  2. As my goal was to present the Task Programming Model, I skipped a deeper explanation of the Threads:-Create style of programming. Would you like to know more about that type of low-level threaded programming?
  3. Most of the examples I used were artificial ones that illustrated the points I was trying to make. Would you have preferred real-world examples instead?
  4. Did reading my posts get you to actually try writing a parallel algorithm? If yes, did you succeed? If no, why not?
  5. Was the formatting OK, especially the code? Each post included a worksheet containing the examples from the post, so I did not worry too much about ease of copy and paste.
  6. What else would you like to know about? I am definitely planning a post on GPU computing, but since it is not really a Maple topic I have delayed it until after I am finished with the Maple topics.

Any other feedback would also be appreciated, although I'd like to keep the discussion focused on the topics covered in my blog, and less about Maple in general.

In my previous posts I have discussed various difficulties encountered when writing parallel algorithms. At the end of the last post I concluded that the best way to solve some of these problems is to introduce a higher-level programming model. This blog post will discuss the Task Programming Model, the high-level parallel programming model introduced in Maple 13.
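To make the discussion concrete, here is a small sketch in the style the model encourages: a divide-and-conquer sum in which each task either computes its range directly or splits it into two child tasks whose results `+` combines.

psum := proc( lo, hi )
    local mid;
    if hi - lo < 1000 then
        return add( i, i = lo .. hi );   # small range: compute directly
    end if;
    mid := iquo( lo + hi, 2 );
    # replace this task with two children and a `+` continuation
    Threads:-Task:-Continue( `+`,
        Task = [ psum, lo, mid ],
        Task = [ psum, mid + 1, hi ] );
end proc:

Threads:-Task:-Start( psum, 1, 10^6 );   # 500000500000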

I have been trying to calculate the Lyapunov exponent of the Rossler oscillator, with the intention of finding it at higher precision. When I calculate it at 16-digit precision on my 64-bit workstation or on my MacBook using Maple, it takes a very long time (15 hours have already passed) and badly slows down the system. Can somebody please help me make my code more efficient, so that it runs without monopolizing my computer's resources?
r := abs(z)^(Re(a)) * exp(-Im(a) * argument(z));
w:= r * abs(z)^(Im(a)*I) * (z/abs(z))^Re(a);

I want to see that z^a = w.

But simplify(w) gives a wrong result; it differs from w:

tstData:= [z=1+3*I, a=-3+I];
z^a; eval(%, tstData):  evalf(%);
'w'; eval(%, tstData): evalf(%);
'simplify(w)'; eval(%, tstData): evalf(%);

tstData:= [z=-2*I, a=+I];
z^a; eval(%, tstData):  evalf(%);
'w'; eval(%, tstData): evalf(%);
'simplify(w)'; eval(%, tstData): evalf(%);

In the last case simplify(w) results in a purely real value,
while w has a nonvanishing imaginary part.

In my previous posts I discussed the basic difference between parallel programming and single threaded programming. I also showed how controlling access to shared variables can be used to solve some of those problems. For this post, I am going to discuss more difficulties of writing good parallel algorithms.

Here are some definitions used in this post:

  • scale: the ability of a program to get faster as more cores are available
  • load balancing: how effectively work is distributed over the available cores
  • coarse grained parallelism: parallelizing routines at a high level
  • fine grained parallelism: parallelizing routines at a low level

Consider the following example
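(The example itself is truncated in this digest; below is a hedged stand-in.) A coarse-grained split can load-balance poorly when the chunks carry very different amounts of work:

work := proc( n )   # cost grows with n
    local i, s;
    s := 0;
    for i to n do s := s + i end do;
    s;
end proc:
# two tasks, but the second does almost all of the work, so one core
# idles while the other finishes: the program fails to scale
id1 := Threads:-Create( work( 10^4 ) ):
id2 := Threads:-Create( work( 10^8 ) ):
Threads:-Wait( id1, id2 );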

As of the 9th of October, 2009:

http://www.mapleprimes.com/mapleranking?sort=desc&order=Points
 

I just want to reiterate how dynamic programming problems can be solved in Maple, especially the dynamic programming models that frequently appear in economic models. First of all, it is important to note that it is close to impossible to find an easy-to-understand, step-by-step road map to dynamic programming. Why is that?! The Maple code below was basically "discovered" by trial and error and pure stubbornness (caveman 101).
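Since the code itself is not reproduced in this digest, here is a hedged stand-in in the same spirit: a tiny cake-eating problem, the standard economic dynamic-programming example, solved by backward induction with memoization (option remember), so each (k, t) state is computed only once.

V := proc( k::nonnegint, t::nonnegint )
    option remember;
    local c;
    if t = 0 then return 0.0 end if;   # no periods left: no value
    # choose consumption c from the remaining cake k; utility ln(1+c),
    # discount factor 0.95 on the value of what remains
    max( seq( evalf( ln( 1 + c ) + 0.95 * V( k - c, t - 1 ) ),
              c = 0 .. k ) );
end proc:
V( 10, 5 );   # value of a 10-unit cake consumed over 5 periods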

 

In the previous post, I described why parallel programming is hard. Now I am going to start describing techniques for writing parallel code that works correctly.

First some definitions.

  • thread safe: code that works correctly even when called in parallel.
  • critical section: an area of code that will not work correctly if run in parallel.
  • shared: a resource that can be accessed by more than one thread.
  • mutex: a programming tool that controls access to a section of code (see the sketch below).
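A brief sketch tying these together (the names are mine): the mutex serializes the critical section so the shared variable stays consistent.

total := 0:
m := Threads:-Mutex:-Create():
addSafely := proc( x )
    global total, m;
    Threads:-Mutex:-Lock( m );
    total := total + x;           # critical section: shared resource
    Threads:-Mutex:-Unlock( m );
end proc:
Threads:-Wait( seq( Threads:-Create( addSafely( i ) ), i = 1 .. 4 ) );
total;   # 10, regardless of how the threads interleave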