0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Key Skills Needed:
- Model Development: Designing, implementing, and fine-tuning generative models; experience building LLM-based models and deploying them to production.
- Natural Language Processing (NLP): Strong understanding of NLP concepts and techniques for handling documents and natural-language inputs.
- Full-Stack Programming Skills: Proficiency in programming languages commonly used in AI and machine learning, such as Python, and libraries like TensorFlow or PyTorch.
- Machine Learning and AI: Knowledge of machine learning concepts and AI algorithms is advantageous, especially for advanced chatbot capabilities and personalization.
- Data Preprocessing: Handling and preprocessing large datasets for training generative models, ensuring data quality and appropriate formatting.
- Problem-Solving Skills: Ability to analyze complex conversational scenarios and design appropriate AI/ML-based solutions that address user queries effectively.
- Communication Skills: Effective communication for collaborating with team members and stakeholders, and for explaining technical concepts to non-technical audiences.
- Collaboration: Working closely with cross-functional teams, including data scientists, engineers, and domain experts, to integrate generative models into real-world applications.
- Creativity and Innovation: Ability to think creatively and produce innovative solutions that enhance the chatbot's performance and user experience.
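The data-preprocessing item above can be illustrated with a minimal, stdlib-only sketch of cleaning and tokenizing raw documents before model training (the function names and toy corpus are invented for illustration, not from the posting):

```python
import re
from collections import Counter

def preprocess(doc: str) -> list[str]:
    """Lowercase, strip non-alphanumerics, and tokenize one document."""
    doc = doc.lower()
    doc = re.sub(r"[^a-z0-9\s]", " ", doc)  # drop punctuation/markup residue
    return doc.split()

def build_vocab(docs: list[str], min_count: int = 1) -> dict[str, int]:
    """Map each sufficiently frequent token to an integer id."""
    counts = Counter(tok for d in docs for tok in preprocess(d))
    vocab = {"<unk>": 0}  # reserve id 0 for out-of-vocabulary tokens
    for tok, n in counts.most_common():
        if n >= min_count:
            vocab[tok] = len(vocab)
    return vocab

corpus = ["Hello, world!", "Hello again - world of NLP."]
vocab = build_vocab(corpus)
encoded = [[vocab.get(t, 0) for t in preprocess(d)] for d in corpus]
```

Real pipelines would add deduplication, language filtering, and a subword tokenizer, but the quality-and-formatting concern is the same.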
Posted 9 hours ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
EXL Decision Analytics
EXL (NASDAQ: EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices, and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, and transportation and logistics industries. Headquartered in New York, EXL has more than 24,000 professionals in locations throughout the United States, Europe, Asia (primarily India and the Philippines), Latin America, Australia, and South Africa.
EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques, and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients' decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assists client organizations with complex risk-minimization methods; advanced marketing, pricing, and CRM strategies; internal cost analysis; and cost and resource optimization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, and transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics.
Job Overview
We are looking for a skilled Data Engineer with strong expertise in Python, Databricks, PySpark, Plotly Dash, data analysis, SQL, and query optimization. The ideal candidate will be responsible for developing scalable data pipelines, performing complex data analysis, and building interactive dashboards to support business decision-making.
Key Responsibilities
- Design, develop, and maintain scalable and efficient data pipelines using PySpark and Databricks.
- Perform data extraction, transformation, and loading (ETL) from diverse structured and unstructured data sources.
- Write and optimize complex SQL queries for high performance and scalability across large datasets.
- Build and maintain interactive dashboards and data visualizations using Plotly Dash or similar frameworks.
- Collaborate closely with data scientists, analysts, and business stakeholders to gather and understand data requirements.
- Ensure data quality, consistency, and integrity throughout the data lifecycle using validation and monitoring techniques.
- Develop and maintain modular, reusable, well-documented code and technical documentation for data workflows and processes.
- Implement data governance, security, and compliance best practices.
Candidate Profile
- 5+ years of relevant experience with data engineering tools
- Programming languages: Python and SQL
- Python frameworks: Plotly Dash, Flask, FastAPI
- Data processing tools: pandas, NumPy, PySpark
- Cloud platforms: Databricks (for scalable computing resources)
- Version control and collaboration: Git, GitHub, GitLab
- Deployment and monitoring: Databricks, Docker, Kubernetes
What We Offer
EXL Analytics offers an exciting, fast-paced, and innovative environment that brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses our clients engage in. You will also learn effective teamwork and time-management skills, key aspects of personal and professional growth.
Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, in which each junior-level employee is paired with a senior-level professional as an advisor. The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
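The query-optimization responsibility in this role can be sketched with SQLite from the Python standard library (the table, column, and index names are hypothetical): EXPLAIN QUERY PLAN shows the full-table scan disappearing once an index covers the filter column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM events WHERE user_id = ?"

# Without an index, SQLite must scan every row to evaluate the filter.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall()

# An index on the filter column lets the engine seek directly to matches.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall()

print(plan_before[0][-1])  # a SCAN step over the whole table
print(plan_after[0][-1])   # a SEARCH step using idx_events_user
```

The same scan-vs-seek reasoning applies at warehouse scale (partitioning and clustering in Databricks play the role the index plays here).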
Posted 9 hours ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking an AI Engineer with 4–5 years of experience, specializing in Azure AI and Microsoft 365 Copilot. The ideal candidate should have strong expertise in machine learning, natural language processing, and generative AI, along with practical experience in building, deploying, and managing AI solutions in the Azure ecosystem. A solid background in MLOps and LLMOps using Azure DevOps is essential.
Key Responsibilities:
- Design, develop, and deploy AI solutions leveraging Azure AI, M365 Copilot, and cognitive AI capabilities.
- Implement and optimize ML algorithms, NLP models, and generative AI applications.
- Collaborate with cross-functional teams to integrate AI into enterprise workflows and applications.
Requirements:
- 4–5 years of hands-on experience in AI/ML development and deployment.
- Proven expertise with Azure AI services, Microsoft 365 Copilot, and Cognitive Services.
- Proficiency in Azure Machine Learning, Azure OpenAI, and related services.
- Practical experience with MLOps and LLMOps workflows in Azure DevOps.
- Programming proficiency in Python (preferred) and relevant AI/ML libraries.
- Familiarity with Azure cloud architecture, security, and compliance.
Posted 9 hours ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
🚀 Calling All Data Enthusiasts in and Around Chennai! 🚀
Are you a Machine Learning Engineer, Data Scientist, or Data Analyst ready to showcase your skills and solve real-world challenges? HCLTech is hosting an exclusive, invite-only Hackathon at our Navalur Campus on 23rd August 2025, and we're scouting top talent to compete, collaborate, and connect!
🔍 What's in it for you?
- Tackle exciting problems aligned with our latest job openings
- Work with cutting-edge tools: Python, TensorFlow, PyTorch, SQL, Tableau, AWS, and more
- Network with HCLTech leaders and fellow innovators
- Walk away with a job offer from HCLTech
💼 Who should apply? If you are hands-on in the skills below, with 6 to 10 years of experience:
- Machine Learning & Deep Learning
- Data Analysis & Visualization
- Cloud Platforms (AWS, Azure, GCP)
- Big Data & MLOps
…this is your moment to shine!
📍 Venue: HCLTech Navalur Campus, Chennai
📅 Date: 23rd August 2025
🔒 Invite-Only: Kindly apply so we can review your profile. Selected candidates will receive a personal invitation to participate.
👉 Spots are limited! Apply now and take the first step toward joining one of the world's leading tech companies.
------------------------------------------------------------------------------------------------------------
Job Description:
We are seeking a skilled Machine Learning Engineer, Data Scientist, or Data Analyst to design, develop, and deploy machine learning models, conduct deep data analysis, and generate actionable insights. The ideal candidate will have experience in data preprocessing, feature engineering, model development, and performance optimization, working with large datasets and leveraging advanced machine learning frameworks.
Key Responsibilities:
Data Preparation & Analysis:
- Gather, clean, and preprocess structured, semi-structured, and unstructured data from various sources.
- Conduct exploratory data analysis (EDA) to identify trends, patterns, and outliers.
- Apply data wrangling techniques using Pandas, NumPy, and SQL to transform raw data into usable formats.
- Use statistical analysis to drive data-driven decision-making.
Machine Learning Model Development:
- Build, train, and fine-tune machine learning models using Scikit-learn, TensorFlow, Keras, or PyTorch.
- Develop predictive models, classification algorithms, clustering models, and recommendation systems.
- Conduct hyperparameter optimization using techniques like grid search or random search.
Model Evaluation & Optimization:
- Evaluate model performance using metrics such as Accuracy, Precision, Recall, F1-Score, AUC-ROC, Confusion Matrix, and Cross-validation.
- Improve model performance through techniques such as feature engineering, data augmentation, and regularization.
- Deploy models into production environments, and monitor performance for continual improvement.
Data Visualization & Reporting:
- Develop dashboards and reports using Tableau, Power BI, Matplotlib, Seaborn, or Plotly.
- Present findings through clear visualizations and actionable insights to non-technical stakeholders.
- Write detailed reports on data analysis and machine learning results, ensuring transparency and reproducibility.
Collaboration & Stakeholder Communication:
- Work closely with cross-functional teams (e.g., engineering, product, business) to define data-driven solutions.
- Communicate technical concepts clearly to non-technical stakeholders and provide insights that influence product and business strategy.
Data Pipeline & Automation:
- Design and implement scalable data pipelines for model training and deployment using Airflow, Apache Kafka, or Celery.
- Automate data collection, preprocessing, and feature extraction tasks.
Research & Continuous Learning:
- Stay up-to-date with the latest trends in machine learning, deep learning, and data science methodologies.
- Explore new tools, techniques, and frameworks to improve model accuracy and efficiency.
Required Skills:
- Programming Languages: Strong proficiency in Python, with experience in SQL.
- Machine Learning: Hands-on experience with Scikit-learn, TensorFlow, Keras, PyTorch, or similar ML libraries.
- Data Analysis: Strong skills in Pandas, NumPy, and Matplotlib for data manipulation and analysis.
- Statistical Analysis: Experience applying statistical methods to data, including hypothesis testing and regression analysis.
- Cloud Platforms: Familiarity with AWS, Azure, or Google Cloud for deploying models and using cloud-native data services (e.g., AWS SageMaker, Azure ML).
- Data Visualization: Experience using Tableau, Power BI, Matplotlib, Seaborn, or Plotly for creating visualizations.
- SQL & Databases: Proficiency in SQL for querying relational databases and working with NoSQL databases (e.g., MongoDB, BigQuery).
- Version Control: Experience using Git for version control.
Desirable Skills:
- Big Data Technologies: Familiarity with tools like Apache Hadoop, Spark, Dask, or Google BigQuery for processing large datasets.
- Deep Learning: Experience with deep learning frameworks such as TensorFlow, PyTorch, or MXNet.
- NLP & Computer Vision: Experience with natural language processing (NLP) using spaCy, NLTK, or transformers, and computer vision using OpenCV or TensorFlow.
- MLOps: Familiarity with MLOps tools like Kubeflow, MLflow, or DVC for managing model workflows.
- Data Engineering: Experience with ETL tools like Apache Airflow, Talend, AWS Glue, or Google Dataflow for data pipeline automation.
Tools & Technologies:
- Machine Learning: Scikit-learn, TensorFlow, PyTorch, Keras, XGBoost
- Data Analysis: Pandas, NumPy, Matplotlib, Seaborn, Plotly
- Cloud Platforms: AWS, Google Cloud, Azure
- Databases: MySQL, PostgreSQL, MongoDB, BigQuery, Snowflake
- Data Visualization: Tableau, Power BI, Matplotlib, Seaborn, Plotly
- Version Control: Git
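The confusion-matrix metrics named in this posting reduce to a few ratios; a stdlib-only sketch with made-up labels (in practice scikit-learn's metrics module would be used):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count TP/FP/FN/TN for a binary classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def scores(y_true, y_pred):
    """Precision, recall, and F1 derived from the confusion counts."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
p, r, f1 = scores(y_true, y_pred)  # precision 2/3, recall 2/3, F1 2/3
```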
Posted 9 hours ago
0.0 years
0 - 0 Lacs
Birajnagar, West Bengal
On-site
About Us:
REPL World is a leading provider of educational robotics and STEAM solutions. We make learning fun and practical through hands-on training in robotics and coding. Our mission is to help students think creatively and develop real-world skills. We work with schools to shape the innovators of tomorrow, today.
Job Summary:
We are seeking a passionate and skilled Robotics & Coding Teacher to inspire and educate students in the fields of coding, robotics, and technology. This role is ideal for someone who enjoys working with young learners, can adapt to varied learning styles, and is excited to share their knowledge in a hands-on, engaging manner.
Key Responsibilities:
- Prepare and deliver interactive coding and robotics lessons and demonstrations.
- Adapt teaching methods and materials to suit different learning styles and abilities.
- Guide and support students in completing coding exercises, robotics projects, and problem-solving activities.
- Assess student progress and provide constructive feedback to enhance their skills.
- Stay updated on the latest trends, tools, and advancements in coding, robotics, and technology education.
- Collaborate with other educators and staff to enrich the overall learning experience.
- Assist in planning and organizing coding events, competitions, and hackathons.
- Foster a positive, inclusive, and encouraging learning environment for all students.
Requirements:
- Bachelor's or Master's degree in Computer Science or IT.
- Prior teaching or training experience in coding/robotics preferred.
- Proficiency in programming languages such as Python, Scratch, C++, or Java (as applicable).
- Familiarity with educational robotics platforms (e.g., LEGO Mindstorms, Arduino, Boffin).
- Strong communication, problem-solving, and classroom management skills.
Job Type: Full-time
Pay: ₹15,000.00 - ₹18,000.00 per month
Education: Bachelor's (Preferred)
Language: English (Preferred)
Location: Birajnagar, West Bengal (Preferred)
Work Location: In person
Posted 9 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: MLOps Engineer / ML Engineer
Experience: 5–10 years
Location: Chennai
Job Overview:
We are looking for an experienced MLOps Engineer to help deploy, scale, and manage machine learning models in production environments. You will work closely with data scientists and engineering teams to automate the machine learning lifecycle, optimize model performance, and ensure smooth integration with data pipelines.
Key Responsibilities:
Transform prototypes into production-grade models:
- Assist in building and maintaining machine learning pipelines and infrastructure across cloud platforms such as AWS, Azure, and GCP.
- Develop REST APIs or FastAPI services for model serving, enabling real-time predictions and integration with other applications.
- Collaborate with data scientists to design and develop drift detection and accuracy measurements for deployed live models.
- Collaborate with data governance and technical teams to ensure compliance with engineering standards.
Maintain models in production:
- Collaborate with data scientists and engineers to deploy, monitor, update, and manage models in production.
- Manage the full CI/CD cycle for live models, including testing and deployment.
- Develop logging, alerting, and mitigation strategies for handling model errors, and optimize performance.
- Troubleshoot and resolve issues related to ML model deployment and performance.
- Support both batch and real-time integrations for model inference, ensuring models are accessible through APIs or scheduled batch jobs depending on the use case.
Contribute to the AI platform and engineering practices:
- Contribute to the development and maintenance of the AI infrastructure, ensuring models are scalable, secure, and optimized for performance.
- Collaborate with the team to establish best practices for model deployment, version control, monitoring, and continuous integration/continuous deployment (CI/CD).
- Drive the adoption of modern AI/ML engineering practices and help enhance the team's MLOps capabilities.
- Develop and maintain Flask- or FastAPI-based microservices for serving models and managing model APIs.
Minimum Required Skills:
- Bachelor's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python, SQL, and PySpark.
- Solid understanding of containerization technologies (Docker, Podman, Kubernetes).
- Proficiency in CI/CD pipelines, model monitoring, and MLOps platforms (e.g., AWS SageMaker, Azure ML, MLflow).
- Proficiency in cloud platforms, specifically AWS, Azure, and GCP.
- Familiarity with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Familiarity with batch-processing integration for large-scale data pipelines.
- Experience serving models using FastAPI, Flask, or similar frameworks for real-time inference.
- Certifications in AWS, Azure, or ML technologies are a plus.
- Experience with Databricks is highly valued.
- Strong problem-solving and analytical skills.
- Ability to work in a team-oriented, collaborative environment.
Tools and Technologies:
- Model Development & Tracking: TensorFlow, PyTorch, scikit-learn, MLflow, Weights & Biases
- Model Packaging & Serving: Docker, Kubernetes, FastAPI, Flask, ONNX, TorchScript
- CI/CD & Pipelines: GitHub Actions, GitLab CI, Jenkins, ZenML, Kubeflow Pipelines, Metaflow
- Infrastructure & Orchestration: Terraform, Ansible, Apache Airflow, Prefect
- Cloud & Deployment: AWS, GCP, Azure, Serverless (Lambda, Cloud Functions)
- Monitoring & Logging: Prometheus, Grafana, ELK Stack, WhyLabs, Evidently AI, Arize
- Testing & Validation: Pytest, unittest, Pydantic, Great Expectations
- Feature Store & Data Handling: Feast, Tecton, Hopsworks, Pandas, Spark, Dask
- Message Brokers & Data Streams: Kafka, Redis Streams
- Vector DB & LLM Integrations (optional): Pinecone, FAISS, Weaviate, LangChain, LlamaIndex, PromptLayer
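The drift-detection responsibility in this role can be sketched with a population stability index (PSI) check between training and live feature distributions. This is a stdlib-only illustration; the bucket count and the 0.1/0.2 thresholds are common rules of thumb, not from the posting, and real deployments would use a monitoring library rather than hand-rolled code.

```python
import math
import random

def psi(expected, actual, buckets=10):
    """Population Stability Index between two samples of one numeric feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]
    edges[-1] = float("inf")  # catch live values above the training max

    def frac(sample):
        counts = [0] * buckets
        for x in sample:
            for i in range(buckets):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
            else:
                counts[0] += 1  # values below the training min
        # Floor each fraction to avoid log(0) for empty buckets.
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]
live_ok = [random.gauss(0, 1) for _ in range(5000)]
live_shifted = [random.gauss(1.5, 1) for _ in range(5000)]

assert psi(train, live_ok) < 0.1        # same distribution: stable
assert psi(train, live_shifted) > 0.2   # shifted mean: trigger an alert
```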
Posted 9 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: Pune
Experience: 4–5 years
Shift: Night Shift (US/Global Support Coverage)
Job Type: Full-time
About the Role:
We are seeking a technically skilled and customer-focused Technical Support Engineer to join our team. This role involves monitoring systems, resolving incidents, and supporting end users and internal teams during off-hours.
Key Responsibilities:
- Provide Tier 1/2 technical support during night shifts.
- Investigate issues using Kibana or similar logging tools to analyze application logs and trace errors.
- Write and execute SQL queries to support data troubleshooting and reporting.
- Monitor infrastructure and applications; respond to system alerts and anomalies.
- Document incidents, resolutions, and standard operating procedures clearly and concisely.
- Work closely with global teams to escalate and track critical issues.
Required Skills & Qualifications:
- 4–5 years of experience in a technical support role.
- Hands-on experience with log analysis tools such as Kibana, Loggly, or Splunk.
- Proficiency in SQL for data lookup and issue analysis.
- Basic understanding of cloud infrastructure and services.
- Basic networking knowledge; familiarity with concepts such as NAT, VPN, firewalls, and routing.
- Experience with scripting languages such as Python or shell scripting is a plus.
- Strong communication, problem-solving, and time management skills.
- Comfort working independently on night shifts.
Preferred Qualifications:
- Exposure to Linux/Unix environments and command-line tools.
- Experience using ticketing systems such as Jira, Zendesk, or ServiceNow.
- Familiarity with REST APIs and testing tools like Postman.
- Prior experience in 24/7 support or night shift roles.
What We Offer:
- Competitive compensation with night shift allowance.
- A collaborative, globally distributed support team.
- Clear growth paths within Technical Support.
- Ongoing training and skill development in tools, infrastructure, and troubleshooting.
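The SQL-for-troubleshooting skill in this posting might look like the following in practice, a minimal sqlite3 sketch (the logs table, endpoints, and status codes are invented): rank endpoints by server-error count to focus an investigation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (endpoint TEXT, status INTEGER)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?)",
    [("/checkout", 500), ("/checkout", 500), ("/checkout", 200),
     ("/login", 200), ("/login", 503)],
)

# Count 5xx responses per endpoint, worst offenders first.
rows = conn.execute(
    """
    SELECT endpoint, COUNT(*) AS errors
    FROM logs
    WHERE status >= 500
    GROUP BY endpoint
    ORDER BY errors DESC
    """
).fetchall()
print(rows)  # [('/checkout', 2), ('/login', 1)]
```

The same GROUP BY/ORDER BY pattern is what a support engineer would run against the production reporting database, with a time-window filter added.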
Posted 9 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion, a global workforce of 234,054, and a NASDAQ listing; it operates in over 60 countries and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. It has major delivery centers in India, including Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.
Job Title: Datadog
- Location: Pune, Bangalore (Hybrid)
- Experience: 5+ years
- Job Type: Contract to hire
- Notice Period: Immediate joiners
Mandatory Skills:
- 5+ years of hands-on experience with Datadog's stack in multi-cloud or hybrid-cloud environments.
- Strong background in systems engineering or software development.
- Experience with Kubernetes and cloud platforms (AWS, GCP, Azure).
- Strong proficiency in programming and scripting languages such as Go, Python, or Java.
- Familiarity with monitoring, alerting, and incident response practices.
- Deep understanding of cloud-native architectures and microservices.
- Experience with high-throughput, low-latency systems.
- Strong communication skills.
- Experience with CI/CD pipelines and monitoring tools.
- Deep understanding of Windows and Linux systems, networking, and operating system internals.
- Experience with distributed systems and high-availability architectures.
- Strong experience with Docker, Kubernetes, and service mesh technologies.
- Experience with tools like Terraform, Ansible, or Pulumi is an extra advantage.
- Building dashboards, monitors, and alert setups.
- Familiarity with Jenkins, GitHub Actions, CircleCI, or similar.
- Automating deployments, rollbacks, and testing pipelines.
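The dashboards-and-monitors skill above can be illustrated generically. This is not the Datadog API; the threshold, window size, and metric values below are invented to show the logic a rolling-average monitor implements: fire only when the metric stays high over a sustained window, not on a single spike.

```python
from collections import deque

class ThresholdMonitor:
    """Fire when the rolling mean of a metric stays above a threshold."""

    def __init__(self, threshold: float, window: int):
        self.threshold = threshold
        self.values = deque(maxlen=window)  # keeps only the last `window` samples

    def observe(self, value: float) -> bool:
        """Record one sample; return True if the alert should fire."""
        self.values.append(value)
        full = len(self.values) == self.values.maxlen
        return full and sum(self.values) / len(self.values) > self.threshold

monitor = ThresholdMonitor(threshold=0.9, window=3)
fired = [monitor.observe(v) for v in [0.5, 0.95, 0.96, 0.97, 0.99]]
# Fires only once three consecutive samples average above 0.9.
```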
Posted 9 hours ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Company
ACLDigital is committed to innovation and excellence in the field of embedded software development. Our mission is to deliver high-quality solutions that empower our clients and enhance their operational efficiency. We foster a culture of collaboration, creativity, and continuous learning.
About the Role
We are seeking a skilled Embedded Software Developer with a strong background in C++ and RTOS application development. The ideal candidate will have a passion for technology and a desire to work on cutting-edge projects.
Qualifications
6+ years of experience in embedded software development.
Required Skills
- Embedded software development experience of 6+ years with C++
- Hands-on experience with RTOS-based (e.g., Zephyr) application development in C/C++, and GTest for unit testing
- Knowledge of data acquisition modules and board bring-up; good debugging skills
- Exposure to working with hardware peripherals
- Knowledge of Git, Jira, and Confluence
- Knowledge of Python and shell scripting
Preferred Skills
- Working experience with U-Boot, Embedded Linux, other open-source components, and RTOS
- Experience with communication interfaces such as I2C, SPI, RS232/485, and USB
- Understanding of industrial protocols such as Ethernet, Modbus, and REST
- Good hands-on experience with MQTT, HTTP, BLE, Wi-Fi, and web servers
Notice Period: 0–15 days only
Equal Opportunity Statement
We are an equal opportunity employer and are committed to fostering a diverse and inclusive workplace. We encourage applications from all qualified individuals.
Please share your CV at jigneshkumar.s@acldigital.com
Posted 9 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Notice Period: 0–15 days only.
REQUIRED SKILLS (80–90%):
1. 3 or more years of embedded software development
2. Experience with a formal software development process (such as Agile)
3. Knowledge of embedded software development tools such as VS Code, the C2000 SDK, make utilities (CMake), memory-map configurations, etc.
4. Hands-on experience with C/C++
5. DevOps tools: GitHub, Git configuration for automation of pre- and post-commit hooks
6. Experience developing in a Unix/Linux environment (Yocto)
7. Basic knowledge of RTOS and Linux
8. Passion for software
DESIRED SKILLS (10–20%):
1. GitHub Actions (YAML)
2. Bash and Python scripting
3. Experience with GitHub Cookiecutter templates
4. Working knowledge of Docker containers
5. Knowledge of the theory and use of Test-Driven Development (GTest)
6. Visual Studio Code extension and plugin creation
7. Basic understanding of REST APIs
8. Basics of cybersecurity
Please share your CV at jigneshkumar.s@acldigital.com
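The Git hook automation mentioned above can be sketched as a minimal pre-commit hook in Python (the DEBUG_PRINT marker and file extensions are invented for illustration). Git runs the executable at .git/hooks/pre-commit and aborts the commit if it exits non-zero.

```python
# Sketch of a .git/hooks/pre-commit script; the marker name is hypothetical.
import subprocess

def staged_files(extensions=(".c", ".cpp", ".h")):
    """C/C++ source files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(extensions)]

def offending(paths, marker="DEBUG_PRINT"):
    """Return the paths that still contain the debug marker."""
    bad = []
    for path in paths:
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                if marker in fh.read():
                    bad.append(path)
        except OSError:
            continue  # deleted or unreadable files are not this hook's concern
    return bad

# In the installed hook, Git aborts the commit on a non-zero exit status:
# raise SystemExit(1 if offending(staged_files()) else 0)
```

A post-commit hook follows the same pattern but runs after the commit, so it can notify or tag rather than block.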
Posted 9 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior Business Analyst at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand business requirements and deliver quality solutions. You'll work on complex technical problems requiring detailed analysis, in conjunction with fellow engineers, business analysts, and business stakeholders.
To be successful as a Senior Business Analyst, you should have:
- A minimum of a Bachelor's or equivalent degree from a recognized university.
- Business analysis experience (requirements gathering and validation, specification development, data analysis, test design and execution) using structured methods or a recognized methodology.
- Experience with the full systems development lifecycle.
- Financial markets product knowledge, including front-to-back trade flow processing, a broad range of investment banking products (equities, fixed income cash, and derivatives), and trade lifecycles.
- Quick learning, strong analytical and problem-solving skills, and excellent written and verbal communication skills.
- Excellent ability to communicate effectively with business and IT development teams.
- Ability to validate business requirements and develop functional specifications.
- Ability to work closely with technical teams to design both technical and procedural solutions.
- Very good technical understanding of Python, Unix, SQL Server, Oracle PL/SQL, and Sybase.
- Strong understanding of data relationships.
- Exposure to conducting impact assessments, gap analysis, and data mappings.
- Ability to create and maintain technical documentation such as functional specs, data flow diagrams, presentations, and spreadsheets.
- Skill in using MS Office (Word, Excel, Visio, PowerPoint).
Some other highly valued skills include:
- Experience working with the Compliance/Risk function within an investment bank.
- Familiarity with surveillance applications (Actimize, SMARTS, TradingHub, TrackWizz, etc.).
- Ability to manage a small team independently.
- A background in project management.
- Independence and creativity in approaching problems and issues; assertiveness and proactivity.
You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.
Purpose of the role
To support the organisation in achieving its strategic objectives through the identification of business requirements and solutions that address business problems and opportunities.
Accountabilities
- Identification and analysis of business problems and client requirements that require change within the organisation.
- Development of business requirements that will address business problems and opportunities.
- Collaboration with stakeholders to ensure that proposed solutions meet their needs and expectations.
- Support for the creation of business cases that justify investment in proposed solutions.
- Conducting feasibility studies to determine the viability of proposed solutions.
- Support for the creation of reports on project progress to ensure proposed solutions are delivered on time and within budget.
- Creation of operational designs and process designs to ensure that proposed solutions are delivered within the agreed scope.
- Support for change management activities, including development of a traceability matrix to ensure proposed solutions are successfully implemented and embedded in the organisation.
Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/ business divisions. Lead a team performing complex tasks, using well developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/ or projects, identifying a combination of cross functional methodologies or practices to meet required outcomes. Consult on complex issues; providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and developing new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business aligned support areas to keep up to speed with business activity and the business strategy. 
Engage in complex analysis of data from multiple internal and external sources of information, such as procedures and practices (in other areas, teams, companies, etc.), to solve problems creatively and effectively. Communicate complex information; 'complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes.
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 9 hours ago
5.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Palantir Developer
Location: Any LTIMindtree location
Exp: 5 to 12 years
Notice: Immediate joiner

Must know:
SQL, Python, Spark, Hadoop, TypeScript
Optimize performance of data transformations and analytical queries
Collaborate with cross-functional teams to understand business requirements and translate them into data solutions

Great to know:
Design and implement scalable data pipelines and workflows using Palantir Foundry
Working knowledge of modules like Contour, Quiver, Vertex, and app development
Develop and maintain data models, ontology layers, and operational dashboards
Ensure data quality, integrity, and governance across all Palantir-based solutions
Stay updated with Palantir platform updates and best practices

Interested candidates can send their resume to sujatha.getari@ltimindtree.com
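The query-optimization requirement above is tool-agnostic; the principle can be sketched with Python's stdlib sqlite3 as a stand-in for a Foundry dataset (the table and data here are invented for illustration):

```python
import sqlite3

# Illustrative only: a toy analytical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, "APAC" if i % 2 else "EMEA", i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reveals whether the optimizer scans or uses an index
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM events WHERE region = 'APAC'"
before = plan(query)                      # full table scan
conn.execute("CREATE INDEX idx_region ON events(region)")
after = plan(query)                       # narrowed to an index search
```

The plan strings show the optimizer switching from a full scan to an index search; the same reasoning (pruning work before it happens) carries over to tuning Spark or Foundry transforms via partitioning and predicate pushdown.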
Posted 9 hours ago
8.0 - 10.0 years
0 Lacs
Mohali district, India
On-site
Experience required: 8-10 years

Alkye is seeking a Senior Python Developer to join our dynamic team in Mohali. As a Senior Python Developer, you will play a crucial role in developing robust web applications, integrating front-end elements, maintaining software frameworks, and ensuring the creation of a reusable and adaptable codebase.

About Us
We are a dynamic and innovative digital services company, specializing in cutting-edge technology solutions across web and app development, UI/UX, AI-driven platforms, digital marketing, and data science. We enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to deliver the best digital products. We offer a highly collaborative, caring team environment with a strong focus on learning and development, and recognition for your individual and team contributions.

Responsibilities:
Develop and maintain web applications using the Flask and Django frameworks.
Develop APIs using Django REST Framework.
Integrate front-end elements into applications to enhance user experience.
Create and maintain various software frameworks to support application development.
Write reusable and adaptable code to optimize application performance and scalability.
Collaborate with the team to incorporate PyCharm as the preferred IDE for development projects.

Qualifications:
Bachelor's or Master's degree in Computer Science or relevant fields such as Engineering or Mathematics.
Minimum of 10 years of experience as a backend developer with solid experience in Python frameworks, with a strong emphasis on Django.
Experience working with SQL databases such as MySQL, PostgreSQL, etc.
Knowledge of AWS, Git, and Docker is considered a plus.
Experience with PyTorch and ML frameworks for machine learning tasks.
Good understanding of software engineering principles, frameworks, and design patterns.
Ability to work independently with attention to detail and quality.
Strong collaboration skills with the ability to communicate effectively with both technical and non-technical stakeholders. Excellent analytical skills and problem-solving abilities. Demonstrated commitment to staying updated with the latest trends, technologies, and programming languages.

If you are passionate about Python development, possess the required skills and experience, and thrive in a collaborative team environment, we encourage you to apply for this exciting opportunity at Alkye.

What We Offer:
Our commitment to being a remarkable workplace, offering meaningful employment where you can contribute to shared values. Delivering Memorable Moments. Joining Alkye India comes with a range of perks:
Competitive packages, with a monthly bonus scheme for outstanding teamwork.
Tailored development opportunities for everyone at all levels and all roles.
The opportunity to join a fast-growing global company located in EMEA and APAC.
Modern, spacious global offices, with opportunities to travel for top performers.
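The "reusable and adaptable code" responsibility above can be sketched without any framework installed; this is not Django REST Framework itself, just the serializer-plus-view pattern it encourages, with an invented `Task` model:

```python
import json

def serialize(obj, fields):
    """Reusable serializer: pick declared fields from any object or dict."""
    get = obj.get if isinstance(obj, dict) else lambda f, d=None: getattr(obj, f, d)
    return {f: get(f) for f in fields}

class Task:
    def __init__(self, id, title, done):
        self.id, self.title, self.done = id, title, done

TASK_FIELDS = ("id", "title", "done")

def task_list_view(tasks):
    """A list view returning a JSON payload, in the spirit of a DRF ListAPIView."""
    body = [serialize(t, TASK_FIELDS) for t in tasks]
    return {"status": 200, "content_type": "application/json",
            "body": json.dumps(body)}

resp = task_list_view([Task(1, "write spec", True), Task(2, "review PR", False)])
```

Because `serialize` accepts both objects and dicts and takes its field list as a parameter, the same function serves every endpoint, which is the adaptability the posting is asking for.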
Posted 9 hours ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking a passionate and experienced AI Lead to drive the development and deployment of cutting-edge AI/ML solutions across the organization. As an AI Lead, you will work closely with business stakeholders, data scientists, engineers, and product teams to design scalable, ethical, and impactful AI models. You will also mentor junior members, guide architectural decisions, and lead AI initiatives from concept to production. Experience in BFSI is Mandatory Key Responsibilities: 1. AI Strategy & Leadership Define and execute the AI roadmap aligned with business objectives. Identify and evaluate AI use cases across business units. Present solution architectures, POCs, and business impact assessments to leadership. 2. Model Development & Deployment Lead the design, development, and deployment of ML/AI models (classification, NLP, computer vision, recommender systems, etc.). Ensure production-grade model performance, latency, accuracy, and interpretability. Integrate AI models into existing systems using MLOps best practices. 3. Technical Expertise Architect end-to-end ML pipelines: data ingestion, preprocessing, model training, evaluation, and monitoring. Apply modern AI frameworks (e.g., TensorFlow, PyTorch, Hugging Face Transformers, LangChain, etc.). Utilize cloud platforms (AWS/GCP/Azure) for scalable AI deployments. 4. Team Management & Mentorship Mentor and guide junior data scientists and engineers. Conduct code reviews, knowledge sharing, and promote AI best practices. Build a strong AI culture within the team. 5. Stakeholder Engagement Collaborate with product, engineering, and business teams to translate requirements into technical solutions. Communicate AI concepts and ROI to non-technical stakeholders. Required Skills & Qualifications: Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related field. 8–10 years of experience in data science, machine learning, or AI-focused roles. 
Proven experience leading AI projects from design to deployment. Proficient in Python, SQL, ML libraries (Scikit-learn, XGBoost, LightGBM), and deep learning frameworks (TensorFlow/PyTorch). Hands-on with GenAI, LLMs, or vector databases is a strong plus. Experience with Docker, Kubernetes, MLflow, Airflow, or similar tools. Solid understanding of data governance, AI ethics, and model interpretability. Preferred Skills (Good to Have): Exposure to LLMs, Prompt Engineering, or Agentic AI Frameworks. Experience in BFSI, fintech, healthcare, or manufacturing domains. Knowledge of MLOps tools like Kubeflow, SageMaker, or Vertex AI.
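The end-to-end pipeline stages named above (ingestion, preprocessing, training, evaluation) can be sketched in stdlib Python with a toy nearest-centroid classifier; this illustrates the stages, not a production model, and the data is invented:

```python
from statistics import mean, pstdev

# stage 1: ingestion (toy rows of [feature1, feature2] with a class label)
rows = [([1.0, 5.0], 0), ([1.2, 4.8], 0), ([0.9, 5.2], 0),
        ([3.0, 1.0], 1), ([3.2, 0.8], 1), ([2.9, 1.1], 1)]
X = [r[0] for r in rows]
y = [r[1] for r in rows]

# stage 2: preprocessing, z-score each feature column
cols = list(zip(*X))
mu = [mean(c) for c in cols]
sd = [pstdev(c) or 1.0 for c in cols]
Xs = [[(v - m) / s for v, m, s in zip(x, mu, sd)] for x in X]

# stage 3: training, one centroid per class
def centroid(points):
    return [mean(c) for c in zip(*points)]

centroids = {label: centroid([x for x, lab in zip(Xs, y) if lab == label])
             for label in set(y)}

# stage 4: evaluation, accuracy of nearest-centroid prediction
def predict(x):
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, centroids[lab])))

accuracy = sum(predict(x) == lab for x, lab in zip(Xs, y)) / len(y)
```

In practice each stage would be a separate, monitored component (e.g. Airflow tasks feeding an MLflow-tracked trainer), but the contract between stages stays the same.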
Posted 9 hours ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Company Description
VocSkill is an ed-tech platform founded by IIT alumni and renowned educationalists, certified by the National Skill Development Corporation. Our mission is to empower learners by creating an ecosystem that makes them job-ready. We offer industry-relevant courses curated by professionals to meet market needs, with a proven track record of successful training and skill development. Our certificate programs include E-Commerce, Business Analytics, Digital Marketing, Fintech & Blockchain, HR Analytics, Corporate Banking, and Design Thinking.

Role Description
We are seeking a passionate Data Science, ML, Blockchain & Data Analytics Trainer to join our team in Navi Mumbai. This is a full-time, on-site position. The trainer will conduct lab-based training sessions for Machine Learning & Blockchain as well as Data Warehousing & Mining, develop curriculum-aligned content, and provide hands-on guidance to students.

You will be responsible for:
Delivering engaging lab sessions covering supervised/unsupervised ML algorithms, neural networks, ensemble methods, blockchain fundamentals, smart contracts, and consensus algorithms.
Teaching data warehousing concepts including OLAP operations, dimensional modelling, and data preprocessing.
Guiding students in implementing classification, clustering, association rule mining, and web mining algorithms using tools like Python, R, and WEKA.
Designing assignments, projects, and assessments that align with university syllabus and industry standards.
Mentoring students towards applying concepts in real-world applications and industry projects.
Staying updated on emerging trends in Data Science, Blockchain, and Data Analytics.

Qualifications
Strong knowledge in Machine Learning, Blockchain, Data Warehousing, and Data Mining.
Proficiency in Python, R, SQL, and relevant ML/Blockchain tools (TensorFlow, PyTorch, Scikit-Learn, Solidity, etc.).
Familiarity with dimensional modelling, OLAP operations, classification, clustering, and association rule mining.
Hands-on experience with smart contract development and blockchain frameworks.
Excellent teaching, communication, and mentoring skills.
Minimum Bachelor's/Master's degree in Computer Science, Data Science, Engineering, or related fields.
Prior teaching/training experience in academic or corporate settings preferred.
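Association rule mining, mentioned above, reduces to counting support and confidence over transactions; a minimal sketch with an invented basket dataset:

```python
transactions = [
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "bread"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from transaction counts."""
    return support(antecedent | consequent) / support(antecedent)

# rule: bread -> butter
rule_conf = confidence({"bread"}, {"butter"})
```

Apriori and its relatives are essentially efficient ways of computing these two quantities over all candidate itemsets without enumerating every subset.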
Posted 9 hours ago
7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title – Integration Specialist (IBM ACE/IIB)
Total Years of Experience – 7+ years
Relevant Years of Experience – 7 years
Mandatory Skills – IBM App Connect Enterprise (ACE) / IBM Integration Bus (IIB); ESQL, MQ, REST/SOAP APIs; Kubernetes / OpenShift (for CP4I); DevOps tools – Jenkins, Ansible, Terraform
Nice to Have Skills – Banking domain experience
Job Description – Design, deploy, and manage integration solutions using IBM App Connect Enterprise (ACE) to connect applications, APIs, and data across hybrid cloud environments. Develop and deploy integration flows using ESQL, Java, REST/SOAP. Administer IBM ACE including installation, configuration, and monitoring. Deploy ACE solutions on-premises, on cloud platforms (IBM Cloud, AWS, Azure), or on Cloud Pak for Integration (CP4I). Implement security standards (TLS, OAuth) and optimize performance. Automate deployments using CI/CD tools (Jenkins, GitHub Actions) and scripting (Bash, Python). Troubleshoot integration issues and provide technical support.
Preferred Certifications – IBM Certified Developer/Admin – ACE; Red Hat OpenShift (for CP4I)
Location – Mumbai (only)
Posted 9 hours ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Your Role
As a Gensler Technical Designer, you will tap into your boundless creativity to contribute to the design of unique environments, providing exemplary design knowledge from conception to completion of projects, working across all design stages.

What You Will Do
Participate in all project phases, including programming, conceptual design, presentations, schematic design, design development, construction documents and production
Collaborate with the design team, clients, consultants, contractors, fabricators and other vendors to meet overall project objectives
Produce graphic presentations, 3D renderings, plans, elevations, details and sections through to detailed construction drawings
Utilize hand rendering techniques to communicate design concepts
Support project sustainability targets throughout project phases and actively engage in delivering them
Study sustainable design strategies in every project stage and investigate solutions to sustainable design challenges
Work collaboratively with the team to optimize sustainability performance through design iterations and research
Engage in climate-focused analysis and studies for each project

Your Qualifications
Bachelor's degree in architecture/interiors or equivalent
3-5+ years of relevant architecture and design experience, with a strong background in the design and delivery of multiple building typologies of varying scale.
Excellent analytical and problem-solving skills
Outstanding presentation and written and verbal communication skills
Creative, original thinking with a technical bias, demonstrated through a strong creative and technical portfolio
Ability to work well under pressure and meet deadlines efficiently
Proficiency in 2D/3D modelling software such as Revit, Octane, 3dViz, 3ds Max and/or Rhino utilizing V-Ray
Proficient in Adobe Creative Suite (Illustrator, Photoshop, InDesign) and/or SketchUp
LEED, WELL, Fitwel, LFA or other rating system accreditations preferable
Demonstrate a collaborative and integrated approach towards achieving high sustainability project goals
Motivated to grow knowledge and experience in sustainability on a personal and a team level

Your Design Technology Qualifications
Essential:
Basic understanding and familiarity with Autodesk Revit for modelling and documentation
Desirable:
Basic understanding and familiarity with Rhinoceros for design authoring
Basic understanding and familiarity with interoperability workflows between various design tools such as AutoCAD, Revit, Rhino, etc.
Basic understanding and familiarity with real-time rendering processes, and material creation and management within the context of integrated BIM and parametric workflows

Applications we work with:
Design Authoring – Revit, Rhino
Collaboration – BIM 360
Computational design – Grasshopper, Dynamo
Building Performance Simulation – Insight, Sefaira, Diva, Ladybug tools
Visualisation – V-Ray, Enscape, Twinmotion, 3ds Max
Graphics & Productivity – Adobe Creative Suite, Microsoft Office Suite
Experiential – Unreal Engine, Unity
Development – C#, Python

To be considered, please submit a portfolio and/or work samples in PDF format.

Life at Gensler
As a people-first organization, we are as committed to enjoying life as we are to delivering best-in-class design. From internal design competitions to research grants to "Well-being Week," our offices reflect our people's diverse interests.
We encourage every person at Gensler to lead a healthy and balanced life. Our comprehensive benefits include medical, dental, vision, disability and wellness programs. We also offer profit sharing and twice annual bonus opportunities. As part of the firm’s commitment to licensure and professional development, Gensler offers reimbursement for certain professional licenses and associated renewals and exam fees. In addition, we reimburse tuition for certain eligible programs or classes. We view our professional development programs as strategic investments in our future.
Posted 9 hours ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About Assured is transforming the infrastructure of U.S. healthcare using intelligent automation. We’re building an AI-native system of action for provider operations to automate the most painful parts of healthcare - credentialing, licensing, and payer enrollment. These are slow, error-prone processes that cost the healthcare system billions and delay patient care. We’re backed by top Silicon Valley investors and trusted by the most innovative provider groups and health systems. This is a rare opportunity to join an elite team reimagining one of the most broken parts of healthcare - using cutting-edge ML in the real world, at scale. The Role: Data Scientist We’re looking for a full-stack Data Scientist to join us as our first dedicated data science hire. You'll partner with our AI/ML engineers and product/engineering teams to build, deploy, and scale machine learning solutions that automate key pieces of the healthcare provider lifecycle. This role is ideal for someone who thrives in early-stage environments, enjoys owning things end to end, and wants their work to have a measurable impact on an industry that desperately needs modern infrastructure. What You’ll Do ML Innovation & Research Lead the design, prototyping, and deployment of models across document processing, LLM-based automation, risk prediction, and compliance inference Apply foundation models, deep learning, and generative AI to healthcare operational data, working on real problems. Designing retrieval + LLM pipelines to interpret ambiguous state license rules and payer policy text. 
Scaling intelligent document intake across 100+ formats using foundation models and structured rules Collaborate closely with engineering and product to take models from concept to production Healthcare Data Integration & Insight Develop and manage data pipelines using structured and semi-structured data (e.g., provider rosters, credentialing forms, payer rules, licensing board data) Analyze large-scale customer data to derive insights that guide product decisions and customer strategy Use operational and compliance data to surface anomalies, inefficiencies, and automation opportunities Stakeholder-Facing & Thought Leadership Interface directly with customers and internal stakeholders to understand use cases and shape the right ML approach Share learnings via internal memos, external blogs, or whitepapers to grow Assured’s ML thought leadership Champion practices around reproducibility, model governance, and continuous learning Team-Building & Mentorship Mentor engineers and future data science hires; help shape the team’s technical direction Establish baseline tooling and processes for experimentation, deployment, and monitoring of ML solutions Work closely with leadership to align ML strategy with business objectives What We’re Looking For Must-Haves 3-5+ years of experience building and shipping ML or deep learning models in production Strong Python skills and fluency with ML libraries (e.g., PyTorch, TensorFlow, Hugging Face) Deep understanding of machine learning algorithms, NLP, and modern data processing workflows Ability to design experiments, evaluate models rigorously, and iterate fast Comfortable working autonomously in ambiguous, fast-changing environments Excellent written and verbal communication for technical and non-technical audiences Preferred Graduate degree (MS/PhD) in a quantitative field (e.g., CS, Statistics, Physics, Applied Math) Experience working with healthcare, insurance, or compliance data Familiarity with AWS/GCP and production ML 
workflows (CI/CD, model monitoring, etc) Experience with LLMs, GenAI, and tools like LangChain, vector databases, or Retrieval-Augmented Generation Publications, blog posts, or open-source contributions in ML or AI You’ll Love This Role If You Want to lead ML projects from idea to deployment Thrive in a 0-to-1 environment and like building from scratch Care about real-world impact, especially in healthcare Enjoy building systems—not just training models Believe great ML products come from close collaboration with product, engineering, and users Why Join Assured High-impact work - Tackle bottlenecks that slow down provider access to patients Real-world AI - Work on meaningful applications of LLMs and applied ML in compliance, forms, automation, and document intelligence Cross-functional exposure - Collaborate with customers, clinical ops, engineers, and founders Early-stage upside - Equity, early influence, and a high-growth trajectory People-first culture - Remote flexibility, mental health time, and a focus on outcomes, not hours
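The retrieval + LLM pipelines described above begin with a retriever; a dependency-free sketch using bag-of-words cosine similarity (the documents and rule text are invented, and a real system would use learned embeddings and a vector database rather than word counts):

```python
import re
from collections import Counter
from math import sqrt

# invented stand-ins for licensing rules and payer policy text
docs = {
    "ca_license": "California license renewal requires 50 CME hours every two years",
    "ny_enroll": "New York payer enrollment requires a CAQH profile and W-9 form",
}

def embed(text):
    # crude bag-of-words "embedding"; a real pipeline would call an embedding model
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(docs[d])), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # grounding step of retrieval-augmented generation: context goes in the prompt
    context = "\n".join(docs[d] for d in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swapping `embed` for a real embedding model and `docs` for a vector store gives the usual RAG shape without changing `build_prompt` at all.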
Posted 9 hours ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hello Connections!
Greetings from ElevarSoC. We are hiring a Linux Performance Developer with 8+ years of experience in Hyderabad. Below is the JD:
· Analyze, measure, and optimize system performance across the full Linux stack—kernel, drivers, user-space services, and applications.
· Profile CPU, memory, I/O, GPU, and power usage to identify performance bottlenecks and inefficiencies.
· Develop and deploy performance monitoring and tracing tools (e.g., perf, ftrace, eBPF, SystemTap, trace-cmd, bpftrace).
· Work closely with kernel, power, graphics, boot, and user-space teams to tune and enhance system responsiveness and throughput.
· 5+ years of experience in Linux performance analysis and tuning on embedded or consumer platforms.
· Deep knowledge of Linux internals: process scheduling, memory management, NUMA, file systems, block devices, I/O stack, etc.
Share your resume to rukshana.khatoon@elevarsoc.com
#HyderabadJobs #jobs #software #LinuxPerformanceDeveloper #Python #kernel #GPU #CPU #Embedded #Linux #Debugging #JoinOurTeam
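Profiling work of the kind described starts from kernel counters such as /proc/stat; a small sketch computing CPU utilisation between two samples (the sample lines are invented; on a live system you would read /proc/stat twice a short interval apart):

```python
# invented snapshots of the aggregate "cpu" line from /proc/stat:
# fields after the label are user nice system idle iowait irq softirq steal guest guest_nice
SAMPLE_T0 = "cpu  4705 150 1120 16250 520 0 175 0 0 0"
SAMPLE_T1 = "cpu  4905 150 1180 16750 560 0 185 0 0 0"

def cpu_times(line):
    fields = [int(v) for v in line.split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait count as "not busy"
    return idle, sum(fields)

def utilisation(t0, t1):
    """Fraction of jiffies spent busy between two /proc/stat samples."""
    idle0, total0 = cpu_times(t0)
    idle1, total1 = cpu_times(t1)
    busy = (total1 - total0) - (idle1 - idle0)
    return busy / (total1 - total0)
```

This delta-of-counters pattern is how tools like top and mpstat derive their percentages; perf and eBPF then attribute that busy time to specific call paths.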
Posted 9 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
· 4+ years of advanced working knowledge of SQL, Python, and PySpark
· AWS exposure
· Candidate should be equally strong in both Python and PySpark; coding is important
Posted 9 hours ago
15.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
🚀 Job Opening: Application Tech Support Practitioner - Python 📍 Location: Ahmedabad 🎓 Qualification: 15 years full-time education 🧑💻 Experience: Minimum 3 years in Python 🔧 Role Summary: Be the vital link between clients and applications! Use your Python skills and communication expertise to keep systems running smoothly. Troubleshoot issues, guide clients, and contribute to a high-performing support team. 💼 Responsibilities: Provide top-notch app support & issue resolution Engage with clients confidently & clearly Become an SME & support junior team members Improve support practices & collaborate with your team 🛠️ Must-Have Skills: Strong Python programming Problem-solving & analytical thinking Excellent communication skills Team player mindset ✨ Good to Have: Knowledge of Network Administration (NA) Interested? Apply now at priyanshi.r@logicplanet.com
Posted 9 hours ago
0 years
0 Lacs
India
Remote
Internship: RevOps/Sales Automation Engineer Intern
Company: Hubcredo
Location: Remote (India)
Duration: 3 months
Stipend: ₹15,000 – ₹20,000/month
Start Date: Immediate

Please watch this video by our current employee to understand your day-to-day responsibilities: https://www.loom.com/share/0e624a6dacc347c686ff9b40d8e883c2?sid=66935823-60f6-41ac-8d4b-299b9320fa2f

Note: Only recent graduates who can work full-time from Monday to Friday are eligible for the role.

About Hubcredo
Hubcredo is a B2B lead generation and sales acceleration agency that powers GTM growth for global startups. We use AI-driven systems, smart data workflows, and multi-channel outreach to help companies scale faster. Our stack includes tools like Clay, Apollo, Instantly, LinkedIn automations, and no-code/low-code solutions like n8n and Zapier.

What You'll Do
As a RevOps Intern, you'll help build the technical foundation for modern GTM operations using AI, automation, and systems thinking.

Key Responsibilities:
Set up and manage CRM systems such as HubSpot, Pipedrive, or Zoho
Automate sales and marketing workflows using n8n, Zapier, and Make
Integrate tools like Apollo, Clay, Instantly, and LinkedIn via APIs and webhooks
Build smart data pipelines for lead enrichment and scoring using AI tools
Clean and transform data using Clay, Google Sheets, or Python scripts
Create dashboards and reports to track revenue, conversion, and outreach metrics
Document RevOps processes and suggest technical improvements

Skills and Requirements
Familiarity with AI or no-code automation tools like n8n, Zapier, or Make
Experience with CRM tools such as HubSpot, Pipedrive, or Zoho
Understanding of sales and marketing data and GTM workflows
Bonus: Knowledge of APIs, webhooks, or basic scripting (Python or JavaScript)
Comfort with tools like Google Sheets, Notion, or Airtable
Problem-solving mindset with strong attention to detail

You'll Thrive If You
Enjoy building automation workflows and solving operational bottlenecks
Have explored tools like n8n, Clay, ChatGPT, or custom data bots Are curious about the intersection of RevOps, AI, and revenue growth Want to work in a fast-paced, results-oriented remote team Who Can Apply Recent graduates Able to commit full-time for 3 months Excited to build real-world systems that drive business impact
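The "clean and transform data using Python scripts" responsibility might look like this in stdlib Python; the lead records and the validation rules are invented for illustration:

```python
import re

# invented CRM export with whitespace noise, casing drift, a duplicate, and a bad email
leads = [
    {"name": "  Asha Rao ", "email": "Asha.Rao@Example.COM", "title": "VP Sales"},
    {"name": "Asha Rao",    "email": "asha.rao@example.com", "title": "VP, Sales"},
    {"name": "Ben Kim",     "email": "ben@acme",             "title": "CTO"},
]

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def clean(records):
    seen, out = set(), []
    for r in records:
        email = r["email"].strip().lower()
        if not EMAIL_RE.match(email) or email in seen:
            continue  # drop invalid emails and dedupe on normalized email
        seen.add(email)
        out.append({"name": " ".join(r["name"].split()),   # collapse whitespace
                    "email": email,
                    "title": r["title"].replace(",", "").strip()})
    return out

cleaned = clean(leads)
```

In a real pipeline the same normalize-validate-dedupe pass would sit between the enrichment step (Clay/Apollo) and the CRM write, so downstream scoring sees one canonical record per contact.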
Posted 9 hours ago
0 years
0 Lacs
India
Remote
Agent-Based Segmentation Expertise: Experience with agent-based segmentation solutions, especially Cisco Secure Workload (CSW). Alternative Tool Experience: If CSW experience is rare, strong background in similar tools like Illumio or Akamai Guardicore is acceptable. Architect/SME Level: Ability to act as an architect and subject matter expert, not just a hands-on engineer. Hands-On Implementation: Practical, hands-on experience with micro-segmentation projects, ideally having led or significantly contributed to such deployments. Stakeholder Communication: Strong skills in communicating technical concepts to internal teams and stakeholders, including managing concerns and leading them through the segmentation journey. Pragmatic Approach: Ability to deliver practical, risk-reducing segmentation rather than aiming for exhaustive segmentation, with a focus on what is achievable and valuable. Documentation: Capable of producing high-quality, auditable documentation for regulatory and external review. Standardization and Simplification: Preference for candidates who can deliver repeatable, standardized solutions rather than complex, one-off configurations. Deployment Scale: Experience with deployments of varying sizes (hundreds to thousands of workloads) is valued. Programming / Scripting / Network Automation – Further to an SME skillset, it’s expected that you will bring some level of programming, scripting or automation experience. Examples of toolset experience expected here includes Python, CI/CD Pipelines, Terraform, Ansible, PowerShell, etc.
Posted 10 hours ago
8.0 - 10.0 years
0 Lacs
India
On-site
Role: Control-M Senior Developer responsible for developing Control-M end-to-end workflows and managing the Control-M environment hosted as a Software-as-a-Service (SaaS) solution, ensuring the smooth and efficient operation of automated workflows and batch processing for the organization.

Key responsibilities, skills, and qualifications:
∙ Migration from Automic to Control-M will be an added advantage
∙ Experience required: 8-10 years
∙ Must have good experience in Control-M job development
∙ In-depth knowledge of Control-M architecture and functionality
∙ Experience working with multi-ERP applications and their integration in Control-M
∙ Experience with scripting languages (e.g., UC4/JCL, Shell, Python, Perl) for automation and job management.
∙ Install, configure, and maintain Control-M software and related components within the SaaS environment.
∙ Manage and optimize Control-M components, including Control-M/Server, Control-M/Agent, and Control-M/EM.
∙ Design, create, and manage job workflows, schedules, and dependencies using Control-M.
∙ Monitor system performance, perform regular health checks, and ensure the stability and security of the Control-M infrastructure.
∙ Troubleshoot and resolve issues related to Control-M and its integrations with other enterprise systems (databases, applications, cloud services).
∙ Collaborate with business users to ensure Control-M jobs meet business requirements.
∙ Develop and implement job scheduling best practices to improve efficiency and reliability.
∙ Create and maintain documentation for job schedules, workflows, and processes, including operational documentation, job schedules, system configurations, credentials, and SOPs.
∙ Provide support and guidance to users regarding Control-M functionalities and best practices, potentially conducting training sessions.
∙ Manage and maintain daily operations of integrated SaaS platforms and associated interfaces and file transfers (e.g., SFTP activity, encryption keys, transfer credentials, file naming conventions). ∙ Monitor Control-M job queues and integration health indicators. ∙ Ensure timely, validated delivery of files and escalate transmission failures or vendor issues. ∙ Support compliance needs (e.g., HIPAA, SOC 2, SOX) by ensuring repeatable, auditable operations. ∙ Proven experience in Control-M Administration and scheduling, including job creation, monitoring, and troubleshooting. ∙ Experience with Control-M integration with other enterprise systems (e.g., databases, applications, cloud services like AWS and Azure). ∙ Knowledge of Linux, Unix, and Windows environments. ∙ Knowledge of ITIL processes and best practices is often desirable. ∙ Experience working in a SaaS environment and familiarity with concepts such as multi-cloud support, workflow observability, and site standards is crucial. ∙ Excellent communication and interpersonal skills to collaborate effectively with various stakeholders. Education A Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience is often preferred
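The file-transfer duties above (enforcing file naming conventions and timely, validated delivery) lend themselves to small scripted checks of the kind the posting mentions; a sketch assuming a hypothetical VENDOR_FEED_YYYYMMDD.csv convention and a made-up delivery SLA:

```python
import re
from datetime import datetime, timedelta

# hypothetical naming convention: VENDOR_FEED_YYYYMMDD.csv
PATTERN = re.compile(r"^(?P<vendor>[A-Z]+)_(?P<feed>[A-Z]+)_(?P<date>\d{8})\.csv$")

def validate_delivery(filename, received_at, sla_hours=4):
    """Return (ok, reason) for a landed file under the invented rules above:
    the name must match the convention, and the file must arrive within
    sla_hours after the close of its business date."""
    m = PATTERN.match(filename)
    if not m:
        return False, "bad filename"
    file_date = datetime.strptime(m.group("date"), "%Y%m%d")
    if received_at - file_date > timedelta(hours=24 + sla_hours):
        return False, "late delivery"
    return True, "ok"

ok, reason = validate_delivery("ACME_CLAIMS_20250101.csv",
                               datetime(2025, 1, 1, 6, 0))
```

Hooking such a check into a Control-M post-processing job turns silent transmission failures into alertable, auditable events, which also supports the HIPAA/SOC 2/SOX repeatability requirement above.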
Posted 10 hours ago
0 years
0 Lacs
India
Remote
Machine Learning Intern (Paid) Company: Unified Mentor Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with Certificate of Internship Application Deadline: 14th August 2025 About Unified Mentor Unified Mentor provides students and graduates with hands-on learning opportunities and career growth in Machine Learning and Data Science. Role Overview As a Machine Learning Intern, you will work on real-world projects, enhancing your practical skills in data analysis and model development. Responsibilities ✅ Design, test, and optimize machine learning models ✅ Analyze and preprocess datasets ✅ Develop algorithms and predictive models ✅ Use tools like TensorFlow, PyTorch, and Scikit-learn ✅ Document findings and create reports Requirements 🎓 Enrolled in or a graduate of a relevant program (Computer Science, AI, Data Science, or related field) 🧠 Knowledge of machine learning concepts and algorithms 💻 Proficiency in Python or R (preferred) 🤝 Strong analytical and teamwork skills Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid) ✔ Hands-on machine learning experience ✔ Internship Certificate & Letter of Recommendation ✔ Real-world project contributions for your portfolio Equal Opportunity Unified Mentor is an equal-opportunity employer, welcoming candidates from all backgrounds.
Posted 10 hours ago