0.0 - 1.0 years
0 - 3 Lacs
Pune
Work from Office
About the Role: We are looking for a passionate and driven AI/ML Engineer to join our team. This entry-level position is ideal for recent graduates or professionals with up to 1 year of experience in machine learning, data science, or artificial intelligence. You'll get the opportunity to work on real-world AI problems and contribute to projects involving data processing, model development, and deployment.
Key Responsibilities:
- Assist in developing and deploying machine learning models and AI solutions
- Preprocess, clean, and analyze large datasets from diverse sources
- Support the building of predictive models using supervised and unsupervised learning techniques
- Work with senior team members on research, model training, and tuning
- Document processes, models, and results
- Collaborate with software engineers and domain experts to integrate ML models
Required Skills:
- Basic understanding of Python, NumPy, Pandas, and Scikit-learn
- Familiarity with machine learning algorithms (Linear Regression, Decision Trees, etc.)
- Knowledge of data preprocessing techniques and model evaluation
- Experience with Jupyter Notebooks
- Strong analytical and problem-solving skills
- Ability to learn quickly and work in a team
Preferred Skills:
- Exposure to TensorFlow, Keras, or PyTorch
- Basic knowledge of NLP, Computer Vision, or Reinforcement Learning
- Understanding of deployment tools (e.g., Flask, FastAPI, Docker)
Education: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
What We Offer:
- Exposure to real AI/ML projects
- Learning and development opportunities
- A collaborative and innovative work environment
- Flexible work culture and supportive mentors
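As a hedged illustration of the entry-level workflow this role describes (preprocessing, a simple supervised model, and evaluation with Pandas and Scikit-learn), the minimal sketch below uses synthetic data; the feature and label names are invented for illustration and are not taken from any actual project.

```python
# Minimal sketch: preprocess a small synthetic dataset, train a Decision Tree, evaluate it.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "feature_a": rng.normal(size=500),
    "feature_b": rng.normal(size=500),
})
# Hypothetical target: label depends on the two features plus noise.
df["label"] = ((df["feature_a"] + df["feature_b"] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["feature_a", "feature_b"]], df["label"], test_size=0.2, random_state=0
)
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train_s, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test_s)))
```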
Posted 1 week ago
6.0 - 11.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry; these are projects that will transform the financial services industry.
MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
Location - Bangalore
Skills and Qualifications:
- At least 6+ years of relevant experience would generally be expected to develop the skills required for this role.
- 6+ years as a practitioner in data engineering or a related field.
- Strong programming skills in Python, with experience in data manipulation and analysis libraries (e.g., Pandas, NumPy, Dask).
- Proficiency in SQL and experience with relational databases (e.g., Sybase, DB2, Snowflake, PostgreSQL, SQL Server).
- Experience with data warehousing concepts and technologies (e.g., dimensional modeling, star schema, data vault modeling, Kimball methodology, Inmon methodology, data lake design).
- Familiarity with ETL/ELT processes and tools (e.g., Informatica PowerCenter, IBM DataStage, Ab Initio) and open-source frameworks for data transformation (e.g., Apache Spark, Apache Airflow).
- Experience with message queues and streaming platforms (e.g., Kafka, RabbitMQ).
- Experience with version control systems (e.g., Git).
- Experience using Jupyter notebooks for data exploration, analysis, and visualization.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a geographically distributed team.
Nice to have:
- Understanding of cloud-based application development and DevOps.
- Understanding of business intelligence tools (Tableau, Power BI).
- Understanding of the trade lifecycle / financial markets.
If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
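To make the Python/SQL data engineering skills above concrete, here is a hedged, minimal extract-transform-load sketch using Pandas and SQLite as a stand-in for a relational source; the table and column names are hypothetical placeholders, not Capco or client systems.

```python
# Minimal ETL sketch: extract rows from a relational source, transform with Pandas,
# and load the result into a reporting table. All names are illustrative only.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")  # stand-in for Sybase/DB2/PostgreSQL, etc.
conn.execute("CREATE TABLE trades (trade_id INTEGER, desk TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [(1, "rates", 1_000_000.0), (2, "fx", 250_000.0), (3, "rates", 500_000.0)],
)

# Extract
trades = pd.read_sql_query("SELECT desk, notional FROM trades", conn)

# Transform: aggregate notional per desk (a toy dimensional rollup)
summary = trades.groupby("desk", as_index=False)["notional"].sum()

# Load into a downstream reporting table
summary.to_sql("desk_notional_summary", conn, if_exists="replace", index=False)
print(pd.read_sql_query("SELECT * FROM desk_notional_summary", conn))
```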
Posted 1 week ago
5.0 - 10.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom
Service Line: Data & Analytics Unit
Roles & Responsibilities:
- Understand the requirements from the business and translate them into appropriate technical requirements.
- Responsible for the successful delivery of MLOps solutions and services in client consulting environments; define key business problems to be solved, formulate high-level solution approaches, identify the data needed to solve those problems, develop and analyze/draw conclusions, and present to the client.
- Assist clients with operationalization metrics to track the performance of ML models.
- Help the team with ML pipelines, from creation to execution.
- Guide the team in debugging issues with pipeline failures.
- Understand and take requirements on operationalization of ML models from data scientists.
- Engage with business stakeholders with status updates on the progress of development and issue fixes.
- Set up standards related to coding, pipelines, and documentation.
- Research new topics, services, and enhancements in cloud technologies.
Additional Responsibilities (EEO / About Infosys): Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. Infosys provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
Technical and Professional Requirements - Preferred Qualifications:
- Experienced in the Agile way of working; managing team effort and tracking through JIRA.
- High-impact client communication.
- Domain experience in Retail, CPG, and Logistics.
- Experience in test-driven development and in using Pytest frameworks, Git version control, and REST APIs.
The job may entail extensive travel. The job may also entail sitting as well as working at a computer for extended periods of time. Candidates should be able to communicate effectively by telephone, email, and face to face.
Preferred Skills: Technology - Machine Learning - Data Science
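As a hedged example of the "operationalization metrics" mentioned above, the sketch below computes a Population Stability Index (PSI), one common way to track whether a deployed model's score distribution has drifted between sign-off and production. The data is synthetic and the 0.2 threshold is only a rule of thumb, not a standard from this posting.

```python
# Minimal sketch: Population Stability Index (PSI) between a baseline score
# distribution and a recent production sample. Synthetic data only.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI = sum((p_actual - p_expected) * ln(p_actual / p_expected)) over bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # A small floor avoids division by zero for empty bins.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=10_000)      # scores at model sign-off
production_scores = rng.beta(2.5, 5, size=10_000)  # slightly shifted in production

value = psi(baseline_scores, production_scores)
print(f"PSI = {value:.3f}")  # values above ~0.2 are often treated as significant drift
```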
Posted 1 week ago
3.0 - 5.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Educational Requirements: Master of Computer Applications, Master of Science, Master of Technology, Bachelor of Engineering, Bachelor of Technology (Integrated)
Service Line: Application Development and Maintenance
Responsibilities - A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure development, validation, and support activities, so that our clients receive high levels of service in the technology domain. You will gather requirements and specifications to understand client needs in detail and translate them into system requirements. You would be a key contributor to building efficient programs and systems. The work includes writing efficient, reusable, testable, and scalable code; integrating user-oriented elements into different applications and data storage solutions; and keeping abreast of the latest technologies and trends. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities:
- Knowledge of design principles and fundamentals of architecture
- Basic understanding of the project domain
- Writing scalable code using the Python programming language
- Ability to translate functional/nonfunctional requirements into system requirements
- Ability to design and code complex programs
- Ability to write test cases and scenarios based on the specifications
- Good understanding of agile methodologies
- Awareness of the latest technologies and trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
Technical and Professional Requirements - Primary skills: Python (Django, Flask, Pandas, NumPy, Pyramid)
Preferred Skills: Technology - OpenSystem - Python
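Since Flask is listed among the primary skills, here is a hedged, minimal REST endpoint sketch; the resource name and payload fields are hypothetical, not part of any Infosys system.

```python
# Minimal Flask sketch: a JSON REST resource with create and read endpoints.
# Resource and field names are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = {}  # in-memory store; a real service would use a database

@app.route("/items", methods=["POST"])
def create_item():
    payload = request.get_json(force=True)
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = {"id": item_id, "name": payload.get("name", "")}
    return jsonify(ITEMS[item_id]), 201

@app.route("/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    item = ITEMS.get(item_id)
    return (jsonify(item), 200) if item else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```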
Posted 1 week ago
3.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Educational Requirements: Master of Computer Applications, Master of Science, Master of Technology, Bachelor of Engineering, Bachelor of Technology (Integrated)
Service Line: Application Development and Maintenance
Responsibilities - A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure development, validation, and support activities, so that our clients receive high levels of service in the technology domain. You will gather requirements and specifications to understand client needs in detail and translate them into system requirements. You would be a key contributor to building efficient programs and systems. The work includes writing efficient, reusable, testable, and scalable code; integrating user-oriented elements into different applications and data storage solutions; and keeping abreast of the latest technologies and trends. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities:
- Knowledge of design principles and fundamentals of architecture
- Basic understanding of the project domain
- Writing scalable code using the Python programming language
- Ability to translate functional/nonfunctional requirements into system requirements
- Ability to design and code complex programs
- Ability to write test cases and scenarios based on the specifications
- Good understanding of agile methodologies
- Awareness of the latest technologies and trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
Technical and Professional Requirements - Primary skills: Python (Django, Flask, Pandas, NumPy, Pyramid)
Preferred Skills: Technology - OpenSystem - Python
Posted 1 week ago
3.0 - 7.0 years
14 - 18 Lacs
Gurugram
Work from Office
As a Generative AI Solution Architect with IBM Consulting, you are primarily responsible for designing and implementing complex GenAI solutions using IBM watsonx and ecosystem partner stacks (Microsoft Azure / OpenAI, AWS, Aleph Alpha, as well as open source). In addition, you support business development and sales, as well as the delivery of consulting and system integration projects in Data & AI for our clients.
- You have end-to-end technical responsibility during the acquisition, design, and delivery of technically complex GenAI projects at scale.
- You are accountable for the development and productive deployment of scalable Generative AI applications and platforms, particularly within (hybrid) cloud architectures.
- You provide consultation and support to clients and colleagues in architecting and selecting the right technology stack for flexible, scalable, and economical GenAI solutions.
- You guide and support clients and colleagues in the adoption of development and operational processes for AI solutions, such as Agile DevOps, FinOps, Trustworthy AI, and MLOps methodologies.
- You stay abreast of the latest developments in the artificial intelligence market and research environment, actively participating in knowledge transfer within IBM Consulting, especially when it comes to mentoring junior team members and delivery teams.
- You also develop the strategy, vision, and roadmap for GenAI architectures within our consulting business, contributing to both our immediate sales objectives and long-term business growth.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Professional with at least 5-10 years of experience, ideally in data and analytics and/or architecture, including 3 years in the design, build, and implementation of AI/Deep Learning and GenAI solutions.
- Experienced in architecting AI solutions and managing delivery of highly technical analytics use cases.
- Conversant with technical stacks used to support Generative AI use cases (AWS, Google, Microsoft, watsonx).
- Familiar with relevant concepts (e.g., transformer model architectures, prompt engineering, model fine-tuning, retrieval-augmented generation architectures) and models/technologies (Microsoft Azure / OpenAI, AWS, Aleph Alpha, Hugging Face, etc., as well as open source).
- Very good stakeholder management and influencing skills; consultancy skills a very strong plus.
- Able to convey complex technical concepts to non-technical stakeholders.
Preferred technical and professional experience: the same expertise as above, plus fluency in English.
Posted 1 week ago
2.0 - 5.0 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.
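To make the PySpark requirement concrete, here is a hedged, minimal sketch of a grouped aggregation with the DataFrame API; it assumes a local pyspark installation, and the dataset and column names are invented for illustration.

```python
# Minimal PySpark sketch: build a small DataFrame and run a grouped aggregation,
# the kind of transform an ETL pipeline might perform. Names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("toy-aggregation").getOrCreate()

rows = [("2024-01-01", "web", 120.0),
        ("2024-01-01", "store", 80.0),
        ("2024-01-02", "web", 150.0)]
df = spark.createDataFrame(rows, schema=["order_date", "channel", "amount"])

summary = (df.groupBy("channel")
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("order_count")))
summary.show()
spark.stop()
```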
Posted 1 week ago
5.0 - 7.0 years
20 - 30 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Senior Data Scientist with experience in pricing optimization, pricing elasticity, and AWS SageMaker. The ideal candidate will have a strong foundation in statistics and machine learning, with a particular focus on Bayesian modeling. As part of our Data Science team, you will work closely with clients to develop advanced pricing strategies using state-of-the-art tools and techniques, including AWS SageMaker, to optimize business outcomes.
Key Responsibilities:
- Lead and contribute to the development of pricing optimization models, leveraging statistical and machine learning techniques to inform strategic decisions.
- Analyze pricing elasticity to predict consumer response to changes in price, helping clients maximize revenue and market share.
- Implement and deploy machine learning models using AWS SageMaker for scalable and efficient performance in a cloud environment.
- Utilize Bayesian modeling to support decision-making processes, providing insights into uncertainty and model predictions.
- Collaborate with cross-functional teams to integrate data-driven insights into business processes.
- Communicate complex results and findings in a clear and concise manner to both technical and non-technical stakeholders.
- Continuously explore and experiment with new modeling approaches and tools to improve the accuracy and efficiency of pricing solutions.
Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Economics, or a related field. Advanced degrees preferred.
- 5+ years of hands-on experience in data science, with a focus on pricing optimization and elasticity modeling.
- Expertise in Bayesian modeling and machine learning techniques.
- Proven experience working with AWS SageMaker for model development, deployment, and monitoring.
- AWS Certified Data Analytics - Specialty certification is a plus.
- Strong programming skills in Python (preferred) or R.
- Experience with cloud platforms (AWS preferred), including SageMaker.
- Proficiency in statistical analysis tools and libraries (e.g., NumPy, Pandas, PyMC3, or similar).
- Excellent problem-solving and analytical thinking skills.
- Ability to work in a fast-paced environment and manage multiple projects.
- Strong communication skills with the ability to explain complex concepts to non-technical audiences.
Preferred Qualifications:
- Experience with A/B testing, econometrics, or other statistical experimentation methods.
- Familiarity with other cloud computing platforms (e.g., Azure, GCP).
- Experience working in cross-functional teams and client-facing roles.
Additional Information:
- Opportunity to work with cutting-edge technology in a dynamic environment.
- Exposure to a diverse range of industries and projects.
- Collaborative and inclusive work culture with opportunities for growth and professional development.
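Because the posting names Bayesian modeling and PyMC3, here is a hedged, minimal sketch of estimating a constant price elasticity from synthetic data with a log-log demand model. The data, priors, and sampler settings are illustrative assumptions, and note that newer releases of the library are published as `pymc` with a slightly different import.

```python
# Minimal sketch: Bayesian estimate of price elasticity from a log-log demand model.
# log(quantity) = intercept + elasticity * log(price) + noise; synthetic data only.
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(1)
price = rng.uniform(5.0, 20.0, size=200)
log_price = np.log(price)
log_qty = 4.0 - 1.5 * log_price + rng.normal(scale=0.2, size=200)  # true elasticity -1.5

with pm.Model():
    intercept = pm.Normal("intercept", mu=0.0, sigma=10.0)
    elasticity = pm.Normal("elasticity", mu=0.0, sigma=5.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = intercept + elasticity * log_price
    pm.Normal("obs", mu=mu, sigma=sigma, observed=log_qty)
    trace = pm.sample(1000, tune=1000, chains=2, cores=1)

# The posterior mean should land near the true elasticity of -1.5.
print("posterior mean elasticity:", trace["elasticity"].mean())
```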
Posted 1 week ago
4.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Design and Implement RAG-Based Solutions: Lead the development of a robust RAG-based system that seamlessly integrates generative AI models with retrieval mechanisms, ensuring optimal accuracy, performance, and scalability.
- Agent-Based Solution Development: Build AI agents that can autonomously perform tasks using information retrieval, language models, and multi-turn interactions, using the AutoGen and LangGraph frameworks.
- Generative AI Integration: Leverage LLMs (e.g., GPT, Anthropic, Llama, DeepSeek, etc.) for generating high-quality, contextually accurate content in response to user queries.
- Prompt Engineering: Strong proficiency in prompt engineering to design, test, and optimize prompts for large language models, ensuring effective agent behavior and task completion.
- Data & Knowledge Integration: Work on integrating structured and unstructured data sources into the RAG framework to improve model outputs.
- Collaborate with Teams: Work closely with product, engineering, and data science teams to ensure seamless integration of AI capabilities into broader products.
- Security & Compliance: Follow best practices in data security, privacy, and compliance, particularly in AI-driven applications.
- Model Evaluation: Implement strategies for continuous evaluation, monitoring, and tuning of models to ensure high-quality outputs.
- Data Pipeline Design: Build robust data pipelines for preprocessing and integrating structured and unstructured data sources into the platform.
- Collaborate with Engineering: Work closely with data engineers and software engineers to ensure seamless integration of machine learning models into production environments.
- Performance Optimization: Implement and experiment with techniques such as vector search, semantic search, and knowledge distillation to enhance the platform's efficiency and accuracy.
- Data Quality: Ensure data quality and consistency throughout the data pipeline, implementing necessary quality checks and validation processes.
Requirements:
- 4+ years of experience building GenAI solutions using Python, LangChain, and AutoGen technologies.
- Agentic AI: experienced in developing agents using the AutoGen and LangGraph frameworks.
- Prior experience in building GenAI solutions using effective text representation techniques and classification algorithms.
- Experience with ML: model building, semantic extraction techniques, data structures, and modelling.
- Experience with supervised/unsupervised learning, sentiment analysis, and statistical analysis using NLP and Python libraries (NLTK, spaCy, NumPy, Pandas).
- Experienced in building REST APIs.
- Experience building applications with microservices and serverless architecture.
- Excellent communication skills.
Good to have:
- Experience with Azure cloud services.
- Experience with Azure Foundry & Prompt Flow.
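As a hedged illustration of the retrieval-augmented generation pattern this role centers on, the sketch below wires a toy TF-IDF retriever to a placeholder generation step. `call_llm` is a hypothetical stand-in for whichever LLM client (OpenAI, Anthropic, Llama, etc.) a real system would use, and the documents and question are invented; a production build would use a framework such as LangChain and a vector store instead.

```python
# Toy RAG sketch: retrieve the most relevant documents, then pass them to an LLM.
# `call_llm` is a hypothetical placeholder, not a real client library.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Premium support is available on weekdays from 9am to 6pm IST.",
    "Invoices can be downloaded from the billing section of the portal.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(DOCS)

def retrieve(query, k=2):
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [DOCS[i] for i in top]

def call_llm(prompt):
    # Placeholder: a real system would call an LLM API here (assumption).
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

def answer(question):
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("How long do I have to return a product?"))
```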
Posted 1 week ago
3.0 - 6.0 years
7 - 11 Lacs
Noida
Work from Office
Clarivate is on the lookout for a Sr. Software Engineer - ML (machine learning) to join our Patent Service team in Noida. The successful candidate will focus on supporting machine learning (ML) projects and on deploying, scaling, and maintaining ML models in production environments, working closely with data scientists, ML engineers, and software developers to architect robust infrastructure, implement automation pipelines, and ensure the reliability and scalability of our ML systems. The ideal candidate should be eager to learn, equipped with strong hands-on technical and analytical thinking skills, have a passion for teamwork, and stay updated with the latest technological trends.
About You (experience, education, skills, and accomplishments):
- A Bachelor's in Engineering or a Master's degree (BE, ME, B.Tech, M.Tech, MCA, MS) with strong communication and reasoning abilities is required.
- Proven experience as a Machine Learning Engineer or in a similar position.
- Deep knowledge of math, probability, statistics, and algorithms.
- Outstanding analytical and problem-solving skills.
- Understanding of data structures, data modeling, and software architecture.
- Good understanding of ML concepts and frameworks (e.g., TensorFlow, Keras, PyTorch).
- Proficiency with Python and basic libraries for machine learning such as scikit-learn and pandas.
- Expertise in prompt engineering.
- Expertise in visualizing and manipulating big datasets.
- Working experience managing ML workloads in production.
- Experience implementing and/or practicing MLOps or LLMOps concepts.
Additionally, it would be advantageous if you have:
- Experience in Terraform or similar, and IaC in general.
- Familiarity with AWS Bedrock.
- Experience with OCR engines and solutions, e.g., AWS Textract, Google Cloud Vision.
- Interest in exploring and adopting data science methodologies and AI/ML technologies to optimize project outcomes.
- Experience working in a CI/CD setup with multiple environments, with the ability to manage code and deployments towards incrementally faster releases.
- Experience with RDBMS and NoSQL databases, particularly MySQL or PostgreSQL.
What will you be doing in this role? Overall, you will play a pivotal role in driving the success of development projects and achieving business objectives through innovative and efficient agile software development practices.
- Designing and developing machine learning systems.
- Implementing appropriate ML algorithms, analyzing ML algorithms that could be used to solve a given problem, and ranking them by their success probability.
- Running machine learning tests and experiments, performing statistical analysis and fine-tuning using test results, and training and retraining systems when necessary.
- Implementing monitoring and alerting systems to track the performance and health of ML models in production.
- Ensuring security best practices are followed in the deployment and management of ML systems.
- Optimizing infrastructure for performance, scalability, and cost efficiency.
- Developing and maintaining CI/CD pipelines for automated model training, testing, and deployment.
- Troubleshooting issues related to infrastructure, deployments, and performance of ML models.
- Staying up to date with the latest advancements in ML technologies and evaluating their potential impact on our workflows.
About the Team: Our team comprises driven professionals who are deeply committed to leveraging technology to make a tangible impact in the patent services area.
Joining us, you'll thrive in a multi-region, cross-cultural environment, collaborating on cutting-edge technologies with a strong emphasis on a user-centric approach.
Posted 1 week ago
4.0 - 8.0 years
25 - 40 Lacs
Hyderabad, Gurugram, Bengaluru
Hybrid
Salary: 25 to 40 LPA
Experience: 4 to 8 years
Location: Gurugram/Bangalore/Hyderabad
Notice: Immediate to 30 days.
Roles & responsibilities:
- 3+ years of experience in Python, ML, and banking model development.
- Interact with the client to understand their requirements and communicate/brainstorm solutions.
- Model development: design, build, and implement credit risk models.
- Contribute to how the analytical approach is structured for the specification of analysis.
- Contribute insights from the conclusions of analysis that integrate with the initial hypothesis and business objective. Independently address complex problems.
- 3+ years of experience in ML/Python (predictive modelling). Design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate our business.
- Create experiments and prototype implementations of new learning algorithms and prediction techniques.
- Collaborate with product managers and stakeholders to design and implement software solutions for science problems.
- Use machine learning best practices to ensure a high standard of quality for all team deliverables.
- Experience working on unstructured data (text): text cleaning, TF-IDF, text vectorization.
- Hands-on experience with IFRS 9 models and regulations.
- Data Analysis: Analyze large datasets to identify trends and risk factors, ensuring data quality and integrity.
- Statistical Analysis: Utilize advanced statistical methods to build robust models, leveraging expertise in R programming.
- Collaboration: Work closely with data scientists, business analysts, and other stakeholders to align models with business needs.
- Continuous Improvement: Stay updated with the latest methodologies and tools in credit risk modeling and R programming.
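Because the role calls out text cleaning and TF-IDF vectorization for unstructured data, here is a hedged, minimal sketch; the example notes, labels, and the downstream classifier are purely illustrative and not tied to any real credit-risk dataset.

```python
# Minimal sketch: clean short texts, vectorize with TF-IDF, fit a simple classifier.
# The texts and labels below are invented for illustration.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "Customer missed two EMI payments last quarter",
    "Account settled in full, no outstanding balance",
    "Repeated late payments and a returned cheque",
    "Loan closed early with all dues cleared",
]
labels = [1, 0, 1, 0]  # 1 = higher-risk note, 0 = lower-risk note (toy labels)

def clean(text):
    text = text.lower()
    return re.sub(r"[^a-z\s]", " ", text)  # keep letters and spaces only

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform([clean(t) for t in texts])

clf = LogisticRegression().fit(X, labels)
new_note = clean("Two missed payments reported this month")
print(clf.predict(vectorizer.transform([new_note])))
```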
Posted 1 week ago
3.0 - 8.0 years
20 - 35 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 20 to 35 LPA
Experience: 3 to 8 years
Location: Gurugram/Bangalore/Hyderabad
Notice: Immediate to 30 days.
Roles & responsibilities:
- 3+ years of experience in Python, ML, and banking model development.
- Interact with the client to understand their requirements and communicate/brainstorm solutions.
- Model development: design, build, and implement credit risk models.
- Contribute to how the analytical approach is structured for the specification of analysis.
- Contribute insights from the conclusions of analysis that integrate with the initial hypothesis and business objective. Independently address complex problems.
- 3+ years of experience in ML/Python (predictive modelling). Design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate our business.
- Create experiments and prototype implementations of new learning algorithms and prediction techniques.
- Collaborate with product managers and stakeholders to design and implement software solutions for science problems.
- Use machine learning best practices to ensure a high standard of quality for all team deliverables.
- Experience working on unstructured data (text): text cleaning, TF-IDF, text vectorization.
- Hands-on experience with IFRS 9 models and regulations.
- Data Analysis: Analyze large datasets to identify trends and risk factors, ensuring data quality and integrity.
- Statistical Analysis: Utilize advanced statistical methods to build robust models, leveraging expertise in R programming.
- Collaboration: Work closely with data scientists, business analysts, and other stakeholders to align models with business needs.
- Continuous Improvement: Stay updated with the latest methodologies and tools in credit risk modeling and R programming.
Posted 1 week ago
2.0 - 6.0 years
2 - 6 Lacs
Mumbai, Hyderabad
Work from Office
Objectives of This Role:
- Design and implement efficient, scalable backend services using Python.
- Work closely with healthcare domain experts to create innovative and accurate diagnostics solutions.
- Build APIs, services, and scripts to support data processing pipelines and front-end applications.
- Automate recurring tasks and ensure robust integration with cloud services.
- Maintain high standards of software quality and performance using clean coding principles and testing practices.
- Collaborate within the team to upskill and unblock each other for faster and better outcomes.
Primary Skills
Python Development:
- Proficient in Python 3 and its ecosystem
- Frameworks: Flask / Django / FastAPI
- RESTful API development
- Understanding of OOP and SOLID design principles
- Asynchronous programming (asyncio, aiohttp)
- Experience with task queues (Celery, RQ)
- Rust programming experience for systems-level or performance-critical components
Testing & Automation:
- Unit testing: PyTest / unittest
- Automation tools: Ansible / Terraform (good to have)
- CI/CD pipelines
DevOps & Cloud:
- Docker, Kubernetes (basic knowledge expected)
- Cloud platforms: AWS / Azure / GCP
- Git and GitOps workflows
- Familiarity with containerized deployment & serverless architecture
Bonus Skills:
- Data handling libraries: Pandas / NumPy
- Experience with scripting: Bash / PowerShell
- Functional programming concepts
- Familiarity with front-end integration (REST API usage, JSON handling)
Other Skills:
- Innovation and thought leadership
- Interest in learning new tools, languages, and workflows
- Strong communication and collaboration skills
- Basic understanding of UI/UX principles
Skills: Rust, Python, Artificial Intelligence (AI), Machine Learning (ML), Data Science, Data Analytics, pandas
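Given the FastAPI and asynchronous-programming items above, here is a hedged, minimal sketch of an async JSON endpoint; the route, field names, and diagnostics-style payload are hypothetical, and it assumes the file is saved as app.py with uvicorn installed.

```python
# Minimal FastAPI sketch: one async endpoint with a typed request body.
# Route and field names are illustrative; run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ReportRequest(BaseModel):
    patient_id: str
    test_code: str

@app.post("/reports")
async def create_report(req: ReportRequest):
    # A real service would enqueue a task (e.g., via Celery) or query a database here.
    return {"status": "queued", "patient_id": req.patient_id, "test": req.test_code}
```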
Posted 1 week ago
5.0 - 10.0 years
7 - 11 Lacs
Hyderabad
Work from Office
About The Role: As a Senior Backend Engineer you will develop reliable, secure, and performant APIs that apply Kensho's AI capabilities to specific customer workflows. You will collaborate with colleagues from Product, Machine Learning, Infrastructure, and Design, as well as with other engineers within Applications. You have a demonstrated capacity for depth and are comfortable working with a broad range of technologies. Your verbal and written communication is proactive, efficient, and inclusive of your geographically distributed colleagues. You are a thoughtful, deliberate technologist and share your knowledge generously. Equivalent to Grade 11 Role (Internal).
You will:
- Design, develop, test, document, deploy, maintain, and improve software
- Manage individual project priorities, deadlines, and deliverables
- Work with key stakeholders to develop system architectures, API specifications, implementation requirements, and complexity estimates
- Test assumptions through instrumentation and prototyping
- Promote ongoing technical development through code reviews, knowledge sharing, and mentorship
- Optimize application scaling: efficiently scale ML applications to maximize compute resource utilization and meet high customer demand
- Address technical debt: proactively identify and propose solutions to reduce technical debt within the tech stack
- Enhance user experiences: collaborate with Product and Design teams to develop ML-based solutions that enhance user experiences and align with business goals
- Ensure API security and data privacy by implementing best practices and compliance measures
- Monitor and analyze API performance and reliability, making data-driven decisions to improve system health
- Contribute to architectural discussions and decisions, ensuring scalability, maintainability, and performance of the backend systems
Qualifications:
- At least 5+ years of direct experience developing customer-facing APIs within a team
- Thoughtful and efficient communication skills (both verbal and written)
- Experience developing RESTful APIs using a variety of tools
- Experience turning abstract business requirements into concrete technical plans
- Experience working across many stages of the software development lifecycle
- Sound reasoning about the behavior and performance of loosely-coupled systems
- Proficiency with algorithms (including time and space complexity analysis), data structures, and software architecture
- At least one domain of demonstrable technical depth
- Familiarity with CI/CD practices and tools to streamline deployment processes
- Experience with containerization technologies for application deployment and orchestration
Technologies We Love:
- Python, Django, FastAPI
- mypy, OpenAPI
- RabbitMQ, Celery, distributed messaging systems
- OpenSearch, PostgreSQL, Redis
- Git, Jsonnet, Jenkins, containerization technology, container orchestration platforms
- Airflow, AWS, Terraform
- Grafana, Prometheus
- ML libraries: PyTorch, Scikit-learn, Pandas
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Punjab
On-site
As a Python Intern at ADI Group based in Mohali, you will have the unique opportunity to be a part of a multidisciplinary company with a global presence across four continents. Founded in 1999 and known for its innovative services, ADI Group offers end-to-end solutions in animation, IT development, medical writing, data services, and clinical research support to various industries like real estate, healthcare, education, and biotech.
In this role, you will work closely with our development team to gain hands-on experience in software development, data processing, and scripting within a professional environment. Your responsibilities will include assisting in the development, testing, and maintenance of Python applications, collaborating on automation scripts, data processing tasks, and backend development, as well as debugging and troubleshooting issues in existing codebases.
To qualify for this position, you should be currently pursuing or have recently completed a degree in Computer Science, Information Technology, Engineering, Data Science, Statistics, or a related technical field. Basic knowledge of Python and its libraries, familiarity with object-oriented programming concepts, and an understanding of version control systems, particularly Git, are essential. Problem-solving skills, a strong willingness to learn, good communication skills, and the ability to work effectively in a team are also required. While exposure to APIs, web development, or database systems and knowledge of the software development life cycle are preferred, they are not mandatory. Experience with cloud platforms like AWS, GCP, or Azure is considered a plus.
As a Python Intern at ADI Group, you can look forward to mentorship from experienced developers, the opportunity to work on real-world projects, flexible working hours, a certificate of completion, and potential full-time opportunities based on your performance. This is a full-time unpaid internship for a duration of 6 months with a day shift schedule, and the work location is in person. Join us at ADI Group to embark on a rewarding journey of learning and growth in the field of software development and data processing.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Python Django & JavaScript Developer with 3-4 years of experience, you will be an integral part of the digital team at our company based in Gurgaon. The company has undergone a tech transformation, resulting in the establishment of a separate digital unit that operates in an agile manner to meet evolving business requirements.
Your primary responsibility will involve working on the telesales routing engine, which is integrated with our website and other lead sources. You will collaborate with a small team of 3 developers in a fast-paced environment, engaging regularly with business teams. The team plays a crucial role in driving the growth of our telesales channel through continuous improvements and new developments on the routing engine, based on inputs from analytics and the business team. Additionally, real-time analytics and profiling tools are deployed to enhance lead conversion.
Key Requirements:
- Hands-on experience of 3-4 years in core Python and the Django framework, with a good understanding of JavaScript.
- Proficiency in frameworks, new solutions development, system design, enhancement, and modernization using Python, Django, Postgres, or MySQL.
- Fundamental knowledge of front-end technologies such as JS, CSS3, and HTML5.
- Desirable familiarity with Angular, React JS, web frameworks, and RESTful APIs.
- Proven project management skills, delivering to project timetables and providing technical solutions.
- Ability to manage multiple tasks within the same assignment effectively and prioritize tasks.
- Strong architecture knowledge to build systems supporting business goals, with an understanding of various architectural styles.
- Effective interpersonal skills to collaborate within teams and with external teams (Marketing, Sales, Product), management, and partners.
- Experience with code packaging, release, deployment, and versioning tools like Git and SVN.
- Knowledge of enterprise platforms and tools like Apache/NGINX.
Roles and Responsibilities:
- Writing efficient, reusable, testable, and scalable code.
- Analyzing and implementing business needs and feature modification requests, and converting them into software components.
- Developing backend components to enhance performance and responsiveness, server-side logic, platform, statistical learning models, and highly responsive web applications.
- Designing and implementing high-availability and low-latency applications, data protection, and security features.
- Enhancing the functionalities of current software systems.
- Working with Python libraries like Pandas, NumPy, etc.
- Mentoring other team members.
Required Skills and Experience:
- Industry: IT/Computers-Software
- Role: Software Engineer
- Key Skills: Python, Django, JavaScript, MySQL, React, Angular, CSS, HTML
- Education: B.Sc/B.Com/M.Sc/MCA/B.E/B.Tech
- Email ID: Resume.Augusta@augustainfotech.com
If you meet the above requirements and are interested in being part of a dynamic team driving technological advancements, we encourage you to apply with your resume and other details to Info@augustainfotech.com. Immediate joiners are preferred.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Python Developer specialized in Data Science & Django at I Vision Infotech, you will play a crucial role in developing back-end web applications with a focus on writing clean and efficient Python code. Your responsibilities will include collaborating with cross-functional teams to integrate user-facing elements with server-side logic, designing, developing, and maintaining software applications and databases, and ensuring high performance and responsiveness.
You will be expected to work on developing web applications using Django, handling data collection, processing, and visualization using libraries such as Pandas, NumPy, and Matplotlib, building REST APIs using Django REST Framework, and collaborating with the team for real-world project deployments. Additionally, your role will involve working closely with data science teams to implement and optimize data-driven solutions through object-oriented programming and software development.
To excel in this role, you should possess strong knowledge of Python programming, a basic understanding of data science libraries (Pandas, NumPy, Matplotlib, Scikit-learn), hands-on experience with Django or Django REST Framework, and familiarity with HTML, CSS, and JavaScript at a basic level. Knowledge of Git is considered a plus. The ideal candidate will hold a degree in B.E/B.Tech/BCA/MCA/BSc/MSc (CS/IT) or equivalent.
Joining I Vision Infotech will offer you real-time project exposure, an internship/training certificate, and flexible working hours, providing you with a conducive environment to enhance your skills and grow in your career. If you are passionate about Python and eager to work on cutting-edge projects in the field of Data Science and Django, we look forward to having you on board.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
At Skillsoft, we propel organizations and individuals towards growth through transformative learning experiences. We firmly believe that every team member possesses the potential to achieve greatness. Join us in our mission to revolutionize learning and assist individuals in unlocking their full potential.
As a Sr. Analyst Instructional Designer (Tech Content Strategist) specializing in Data, you will play a pivotal role in developing engaging and effective learning experiences within the Data domain. Your responsibilities will involve leading the entire content development process, from planning and scripting to reviewing and enhancing content for Data. You will need to leverage your instructional design expertise to create learner-centered content that caters to various learner personas and real-life scenarios. Collaboration with visual designers, editors, and technical experts will be essential to ensure that the content is presented in a compelling and accessible format.
Your primary responsibilities will include owning the content development lifecycle for Data, designing creative and impactful learning experiences based on instructional design principles, scripting engaging digital content, collaborating with cross-functional teams, aligning content with industry certification frameworks, identifying learning gaps, and utilizing Generative AI tools to enhance content creation.
To excel in this role, you should have a minimum of 5+ years of practical experience in data analytics or related fields, proficiency in instructional design, excellent scripting and communication skills, creativity, storytelling ability, programming knowledge (Python and SQL), familiarity with data libraries, and experience with data visualization tools. Additionally, relevant certifications in Data and familiarity with Generative AI tools and instructional design models will be advantageous.
Skillsoft is a leading provider of online learning, training, and talent solutions aimed at helping organizations maximize the potential of their workforce. By offering immersive and engaging content, Skillsoft empowers organizations to develop the skills necessary for success in today's competitive landscape. As a partner to numerous global organizations, Skillsoft offers award-winning systems and platforms to support learning, performance, and overall success. If you are intrigued by this opportunity, we encourage you to apply and be a part of our journey towards transforming learning and empowering individuals to succeed.
NOTE TO EMPLOYMENT AGENCIES: Skillsoft values the partnerships with our preferred vendors and does not accept unsolicited resumes. Kindly ensure a signed Skillsoft Employment Agency Agreement is on file before submitting any resumes.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are looking for a skilled and analytical Data Analyst with expertise in data modeling, data analysis, and Python programming. As a Data Analyst, you will be responsible for designing data models, conducting in-depth analysis, and creating automated solutions to facilitate business decision-making and reporting.
Your key responsibilities will include designing and implementing conceptual, logical, and physical data models to support analytics and reporting. You will analyze large datasets to uncover trends, patterns, and insights that drive business decisions. Additionally, you will develop and maintain Python scripts for data extraction, transformation, and analysis. Collaboration with data engineers, business analysts, and stakeholders to understand data requirements is essential. Creating dashboards, reports, and visualizations to effectively communicate findings will be part of your role. Ensuring data quality, consistency, and integrity across systems, as well as documenting data definitions, models, and analysis processes, will also be key responsibilities.
The ideal candidate should have strong experience in data modeling, including ER diagrams, normalization, and dimensional modeling. Proficiency in Python for data analysis using Pandas, NumPy, Matplotlib, etc., is required. A solid understanding of SQL and relational databases is necessary, along with experience in data visualization tools such as Power BI, Tableau, or matplotlib/seaborn. You should be able to translate business requirements into technical solutions and possess excellent analytical, problem-solving, and communication skills.
Virtusa values teamwork, quality of life, and professional and personal development. Joining Virtusa means becoming part of a global team of 27,000 individuals who are dedicated to your growth. You will have the opportunity to work on exciting projects and leverage state-of-the-art technologies throughout your career with us. At Virtusa, collaboration and a team-oriented environment are paramount, providing great minds with a dynamic space to cultivate new ideas and promote excellence.
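To ground the Python analysis stack the role asks for (Pandas, NumPy, Matplotlib), here is a hedged, minimal sketch of summarizing a trend and plotting it; the dataset and column names are invented, and a real analysis would read from SQL or files instead.

```python
# Minimal sketch: summarize a monthly trend with Pandas and plot it with Matplotlib.
# The data below is synthetic and purely illustrative.
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
    "region": ["North", "South"] * 3,
    "revenue": [120, 95, 130, 100, 145, 110],
})

monthly = sales.groupby("month", as_index=False)["revenue"].sum()
print(monthly.describe())  # quick statistical summary of the aggregated series

plt.plot(monthly["month"], monthly["revenue"], marker="o")
plt.title("Total revenue by month (synthetic data)")
plt.xlabel("Month")
plt.ylabel("Revenue")
plt.tight_layout()
plt.show()
```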
Posted 1 week ago
2.0 - 11.0 years
0 Lacs
Maharashtra
On-site
Dear Candidate, hope you are doing well!
CMS Computers Limited (INDIA) is currently looking to fill a position in Data Analytics / BI in Mumbai (Bhandup) with a work-from-office (WFO) setup. The ideal candidate should have 2-3 years of experience in areas such as investment, real estate, taxation, finance, and accounts. Please note that an immediate to 15 days' notice period will be considered.
In another opportunity, CMS Computers Limited (INDIA) is hiring for a CA Requirement position in Mumbai (Bhandup) with a work-from-office (WFO) setup. This role requires a candidate with 8+ years of experience. The candidate should have expertise in the Data Analytics / BI domain, with specific skills in Python for data analytics projects, advanced SQL for backend operations, experience working on various projects across different domains and use cases, good communication skills, and the ability to lead a team effectively.
The ideal candidate for the Data Analytics / BI role should possess 6 to 11 years of hands-on experience with Python, proficiency in Python data structures and algorithms, familiarity with libraries like Pandas, NumPy, and Requests, and experience in scripting and framework creation using Python. Knowledge of SCM tools like Git and Bitbucket, working with RDBMS using SQL, unit testing frameworks, and performance testing are also required. Strong communication and debugging skills are essential for this role.
Key Responsibilities:
- Write effective, scalable code
- Develop back-end components to enhance responsiveness and performance
- Integrate user-facing elements into applications
- Test and debug programs
- Enhance functionality of existing systems
- Implement security and data protection solutions
- Assess and prioritize feature requests
- Collaborate with internal teams to understand user requirements and provide technical solutions
Requirements and skills:
- Work experience as a Python Developer
- Proficiency in at least one popular Python framework (e.g., Django, Flask, or Pyramid)
- Knowledge of object-relational mapping (ORM)
- Familiarity with front-end technologies such as React JS, JavaScript, and HTML5
- Team player with good problem-solving skills
Desired Skills:
- Experience with cloud platforms
- Knowledge of Big Data technologies like PySpark/Spark-Scala
- Working knowledge of Kubernetes, dockerization, and Terraform
If you are interested in this opportunity, please share your updated resume at chaitali.saha@voqeoit.com. Kindly provide the following details for further processing:
- Total Experience
- Relevant Experience in
- Current CTC
- Expected CTC
- Notice Period
- Negotiable Notice Period
- Last WD (if not working)
- Job offers in hand
Regards,
Chaitali Saha
Talent Acquisition Group at Voqeoit Technologies
Email: chaitali.saha@voqeoit.com
Contact no: 8088779710
Website: www.voqeoit.com
Thank you.
Posted 1 week ago
6.0 - 11.0 years
0 Lacs
Maharashtra
On-site
As a Python Developer leading the Data Analytics/BI domain with over 8 years of experience, you will play a crucial role on-site in Mumbai (Bhandup). Your primary responsibilities include leveraging your expertise in Python, specifically in data analytics projects, and showcasing advanced SQL skills for backend development. Your experience working across multiple projects in various domains and use cases will be invaluable. With a strong emphasis on effective communication, you will drive existing analytics products and manage a team. This role calls for a balance of technical prowess (70%) and management skills (30%). Notably, our analytics tool/product is already available on the AWS Marketplace.
Your role demands 6 to 11 years of hands-on experience in Python, demonstrating the ability to write robust code and a deep understanding of data structures, data transformation, and algorithms. Proficiency in Python libraries such as Pandas, NumPy, and Requests is essential, along with using Python for scripting and framework creation. In addition, familiarity with SCM tools like Git and Bitbucket, RDBMS, unit testing frameworks like pytest, and performance testing will be beneficial. You should possess excellent communication and debugging skills to collaborate effectively with the team.
Your tasks will involve writing scalable code, developing back-end components for enhanced responsiveness, integrating user-facing elements, testing and debugging programs, and improving existing systems' functionality. Furthermore, implementing security and data protection solutions, assessing feature requests, and providing technical solutions aligned with user requirements will be part of your routine.
Key Requirements and Skills:
- Proven work experience as a Python Developer
- Expertise in at least one popular Python framework such as Django, Flask, or Pyramid
- Knowledge of object-relational mapping (ORM) and familiarity with front-end technologies like React JS, JavaScript, and HTML5
- Strong team spirit and problem-solving skills
Desired Skills:
- Cloud experience
- Big Data knowledge in PySpark/Spark-Scala
- Working knowledge of Kubernetes and dockerization
- Experience with Terraform
If you are a result-oriented programmer with a proactive approach, exceptional technical skills, and a passion for leading data analytics projects, we welcome your application to join our dynamic team.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a high-impact AI/ML Engineer, you will lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You will be an integral part of a fast-paced, outcome-oriented AI & Analytics team, collaborating with data scientists, engineers, and product leaders to translate business use cases into real-time, scalable AI systems.
Your responsibilities in this role will include architecting, developing, and deploying ML models for multimodal problems encompassing vision, audio, and NLP tasks. You will be responsible for the complete ML lifecycle, from data ingestion to model development, experimentation, evaluation, deployment, and monitoring. Leveraging transfer learning and self-supervised approaches where appropriate, you will design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow. Collaborating with MLOps, data engineering, and DevOps teams, you will operationalize models using technologies such as Docker, Kubernetes, or serverless infrastructure. Continuously monitoring model performance and implementing retraining workflows to ensure sustained accuracy over time will be a key aspect of your role. You will stay informed about cutting-edge AI research and incorporate innovations such as generative AI, video understanding, and audio embeddings into production systems. Writing clean, well-documented, and reusable code to support agile experimentation and long-term platform development is an essential part of this position.
To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field, with a minimum of 5-8 years of experience in AI/ML engineering, including at least 3 years in applied deep learning.
In terms of technical skills, you should be proficient in Python, with knowledge of R or Java being a plus. Additionally, you should have expertise in ML/DL frameworks like PyTorch, TensorFlow, and Scikit-learn, as well as experience in computer vision tasks such as image classification, object detection, OCR, segmentation, and tracking. Familiarity with audio AI tasks like speech recognition, sound classification, and audio embedding models is also desirable. Strong capabilities in data engineering using tools like Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data are required. Knowledge of NLP/LLMs, cloud and MLOps services, deployment and infrastructure technologies, and CI/CD and version control tools is also beneficial.
Soft skills and competencies that will be valuable in this role include strong analytical and systems thinking, effective communication skills to convey models and results to non-technical stakeholders, the ability to work cross-functionally with various teams, and a demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY's Financial Services Office (FSO) is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional capability and product knowledge. The FSO practice offers integrated advisory services to financial institutions and other capital markets participants, including commercial banks, investment banks, broker-dealers, asset managers, insurance and energy trading companies, and the Corporate Treasury functions of leading Fortune 500 companies. The service offerings include market, credit, and operational risk management, regulatory advisory, quantitative advisory, technology enablement, and more.
Within EY's FSO Advisory Practice, the Financial Services Risk Management (FSRM) group provides solutions to help clients identify, measure, manage, and monitor market, credit, operational, and regulatory risks associated with trading, asset-liability management, and capital markets activities. The Credit Risk (CR) team within FSRM assists clients in designing and implementing strategic and functional changes across risk management within the banking book portfolios of large domestic and global financial institutions.
Key Responsibilities:
- Demonstrate deep technical capabilities and industry knowledge of financial products, particularly lending products.
- Stay informed about market trends and demands in the financial services sector and issues faced by clients.
- Monitor progress, manage risk, and communicate effectively with key stakeholders.
- Mentor junior consultants and review tasks completed by them.
- Work on projects involving model audits, validation, and development activities.
Qualifications, Certifications, and Education:
Must-have:
- Postgraduate degree in accounting, finance, economics, statistics, or a related field with at least 3 years of related work experience.
- Understanding of climate risk models, ECL, stress testing, and regulatory requirements related to credit risk.
- Knowledge of credit risk and risk analytics techniques.
- Hands-on experience in data preparation, manipulation, and consolidation.
- Strong documentation skills and the ability to summarize key details effectively.
- Proficiency in statistics and econometrics, and technical skills in advanced Python, SAS, SQL, R, and Excel.
Good-to-have:
- Certifications such as FRM, CFA, PRM, SCR.
- Experience in Data/Business Intelligence Reporting and knowledge of Machine Learning models.
- Willingness to travel and previous project management experience.
EY exists to build a better working world, creating long-term value for clients, people, and society while building trust in capital markets. EY teams across 150 countries provide trust through assurance and help clients grow, transform, and operate, working across assurance, consulting, law, strategy, tax, and transactions to address complex issues globally.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Haryana
On-site
You will be responsible for preparing data, developing models, testing them, and deploying them. This includes designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Your role will involve ensuring that algorithms generate accurate user recommendations. Additionally, you will turn unstructured data into useful information, for example by auto-tagging images and converting text to speech. Solving complex problems with multi-layered data sets and optimizing existing machine learning libraries and frameworks will be part of your daily tasks.

Key Responsibilities:
Develop machine learning algorithms that analyze large volumes of historical data to make predictions
Run tests, perform statistical analysis, interpret the results, and document machine learning processes
As Lead Engineer for ML and Data Engineering, oversee the technologies, tools, and techniques used within the team
Collaborate with the team on designs driven by business requirements
Ensure that development standards, policies, and procedures are adhered to, and drive change to implement efficient and effective strategies
Work closely with peers in the business to fully understand business processes and requirements
Handle maintenance, debugging, and problem-solving
Ensure that all software developed within the team meets the specified business requirements, and respond flexibly to the changing needs of the business

Technical Skills:
4+ years of experience in Python and API development using Flask/Django
Proficiency in libraries such as Pandas, NumPy, Keras, SciPy, Scikit-learn, PyTorch, TensorFlow, and Theano
Hands-on experience in Machine Learning, both supervised and unsupervised (a brief illustrative sketch follows this posting), and familiarity with data analytics tools and libraries
Experience with cloud data pipelines and engineering (Azure/AWS)
Familiarity with ETL pipelines, Databricks, Apache NiFi, Kafka, or Talend
Ability to work independently on projects
Good written and verbal communication skills
Bachelor's Degree in Computer Science/Engineering/BCA/MCA

Desirable Skills:
2+ years of experience in Java
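For illustration only: a minimal sketch of the supervised learning workflow this posting describes, using scikit-learn on a bundled dataset. The dataset and model choice are assumptions made for the example.

```python
# Train, evaluate, and report a simple supervised model (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Bundled dataset stands in for the historical data mentioned in the posting.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Hold-out evaluation, matching the "run tests and interpret results" step.
print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In the role described, a model like this would typically be exposed through a Flask or Django API and wired into a cloud data pipeline.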
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
Maharashtra
On-site
Automation Anywhere is a leader in AI-powered process automation, utilizing AI technologies to drive productivity and innovation across organizations. The company's Automation Success Platform offers a comprehensive suite of solutions including process discovery, RPA, end-to-end process orchestration, document processing, and analytics, all with a security and governance-first approach. By empowering organizations globally, Automation Anywhere aims to unleash productivity gains, drive innovation, enhance customer service, and accelerate business growth. Guided by the vision to enable the future of work through AI-powered automation, the company is committed to unleashing human potential. Learn more at www.automationanywhere.com.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6 to 12 years of relevant experience.
- Proven track record as a Solution Architect or Lead, focusing on integrating Generative AI or exposure to Machine Learning.
- Expertise in at least one RPA tool such as Automation Anywhere, UiPath, Blue Prism, or Power Automate, and proficiency in programming languages like Python or Java.

Skills:
- Proficiency in Python or Java for programming and architecture.
- Strong analytical and problem-solving skills to translate business requirements into technical solutions.
- Experience with statistical packages and machine learning libraries (e.g., R, Python scikit-learn, Spark MLlib).
- Familiarity with RDBMS, NoSQL, and cloud platforms such as AWS, Azure, or GCP.
- Knowledge of ethical considerations and data privacy principles related to Generative AI for responsible integration within RPA solutions.
- Experience in process analysis, technical documentation, and workflow diagramming.
- Designing and implementing scalable, optimized, and secure automation solutions for enterprise-level AI applications.
- Expertise in Generative AI technologies such as RAG, LLMs, and AI agents (a minimal retrieval sketch follows this posting).
- Advanced Python programming skills with specialization in deep learning frameworks, ML libraries, NLP libraries, and LLM frameworks.

Responsibilities:
- Lead the design and architecture of complex RPA solutions incorporating Generative AI technologies.
- Collaborate with stakeholders to align automation strategies with organizational goals.
- Develop high-level and detailed solution designs meeting scalability, reliability, and security standards.
- Take technical ownership of end-to-end engagements and mentor a team of senior developers.
- Assess the applicability of Generative AI algorithms to optimize automation outcomes.
- Stay updated on emerging technologies, particularly in Generative AI, to evaluate their impact on RPA strategies.
- Demonstrate adaptability, flexibility, and willingness to work from client locations or office environments as needed.

Kindly note that all unsolicited resumes submitted to any @automationanywhere.com email address will not be eligible for an agency fee.
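For illustration only: a minimal sketch of the retrieval step in a Retrieval-Augmented Generation (RAG) pipeline, one of the Generative AI techniques named above. TF-IDF stands in for an embedding model, the documents and query are hypothetical, and a production pipeline would pass the retrieved context to an LLM for answer generation.

```python
# Retrieval step of a RAG pipeline (illustrative; TF-IDF stands in for embeddings).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoices over 10,000 USD require a second approval.",
    "Bots retry failed downloads up to three times.",
    "Expense reports are archived after ninety days.",
]
query = "How many times does a bot retry a failed download?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by similarity to the query and keep the best match.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
best_doc = documents[scores.argmax()]
print("Retrieved context:", best_doc)
# Next step in a full pipeline: pass `best_doc` plus the query to an LLM
# to generate a grounded answer.
```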
Posted 1 week ago