- Assist in the design, implementation, and optimization of AI models for distributed systems.
- Work with cloud computing platforms (e.g., AWS, GCP, Azure) to scale AI workloads.
- Collaborate with the team to integrate AI algorithms into distributed architectures.
- Develop and optimize code for high-performance AI systems and algorithms.
- Assist in the evaluation and testing of AI models in distributed environments.
- Work with large datasets, ensuring efficient data storage and processing across nodes.
- Help improve AI model performance, scalability, and fault tolerance in distributed settings.
- Contribute to documentation, code reviews, and research papers (if applicable).
- Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field.
- Basic understanding of AI/ML algorithms and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
- Familiarity with distributed computing concepts and frameworks (e.g., Spark, Hadoop, MPI).
- Proficiency in programming languages such as Python, Java, or C++.
- Understanding of cloud platforms and distributed storage systems.
- Strong problem-solving and analytical skills.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Knowledge of parallel and distributed computing techniques.
- Familiarity with data parallelism, model parallelism, and distributed training of AI models.
- Experience with high-performance computing (HPC) clusters or cloud-based AI environments.
- Exposure to advanced AI topics such as reinforcement learning, deep learning, or natural language processing.
- Hands-on experience with cutting-edge distributed AI technologies and projects.
- Mentorship from experienced AI researchers and engineers.
- Flexible working hours and the option to work remotely.
- Certificate of Internship and Letter of Recommendation upon successful completion.
- Opportunity for a full-time role based on performance.