6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring a FortiSIEM Administrator to manage and maintain our SIEM infrastructure and security tools. The ideal candidate will have deep experience in SIEM architecture (FortiSIEM), EDR, DLP, and a sound understanding of cybersecurity frameworks like MITRE ATT&CK, NIST, CIS Controls, and ISO 27001. The role requires someone who can ensure complete visibility and protection of IT assets while supporting incident response and compliance.
Tasks: Deploy, configure, and maintain the FortiSIEM platform for real-time monitoring and alerting. Integrate log sources across firewalls, servers, endpoints, and cloud environments. Develop and manage SIEM rules, parsers, dashboards, and alerts. Operate and optimize EDR, DLP, and other advanced security tools. Conduct incident triage and investigation, and provide root cause analysis. Align monitoring and response activities with the MITRE ATT&CK, NIST, CIS Controls, and ISO 27001 frameworks. Collaborate with SOC, infrastructure, and application teams for end-to-end threat visibility. Maintain updated documentation and support internal and external security audits. Ensure regular health checks, version upgrades, and platform tuning for performance.
Requirements
Required Skills & Qualifications: 3–6 years of experience in cybersecurity with a focus on SIEM administration (preferably FortiSIEM). Hands-on expertise in deploying and managing EDR, DLP, and other endpoint security tools. Good understanding of SIEM architecture, log ingestion, and threat correlation. Knowledge of networking fundamentals, TCP/IP, firewalls, VPNs, and IDS/IPS. Familiarity with security frameworks like MITRE ATT&CK, NIST, CIS Controls, and ISO 27001. Scripting knowledge (PowerShell, Python, Bash) is an advantage. Fortinet certification (e.g., NSE 5/7) is a plus.
Nice to Have: Experience with cloud platforms (AWS, Azure) and cloud security monitoring. Exposure to other SIEM tools (Splunk, QRadar, etc.) is beneficial. Experience in compliance-driven environments (PCI-DSS, SOC 2, etc.).
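To ground the rule-development and MITRE ATT&CK alignment duties above, here is a minimal, illustrative Python sketch of the parse-then-correlate idea behind a brute-force detection rule. FortiSIEM itself defines parsers and rules in its own XML syntax, so the log format, field names, and threshold below are assumptions for illustration only.

```python
# Illustrative stand-in only: FortiSIEM parsers/rules are written in its own XML
# syntax. The log format, field names, and threshold below are assumed examples.
import re
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical syslog-style line:
# "2024-05-01T10:02:11 host=fw01 user=alice action=login result=failure"
FAILED_LOGIN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}) host=(?P<host>\S+) "
    r"user=(?P<user>\S+) action=login result=failure"
)

def parse(line):
    """Parse one raw log line into a normalized event dict, or None if it doesn't match."""
    m = FAILED_LOGIN.search(line)
    if not m:
        return None
    return {"ts": datetime.fromisoformat(m["ts"]), "host": m["host"], "user": m["user"]}

def brute_force_alerts(events, threshold=5, window=timedelta(minutes=10)):
    """Flag users with >= threshold failed logins inside a rolling window
    (roughly MITRE ATT&CK T1110, Brute Force)."""
    recent = defaultdict(list)
    alerts = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        recent[ev["user"]] = [t for t in recent[ev["user"]] if ev["ts"] - t <= window]
        recent[ev["user"]].append(ev["ts"])
        if len(recent[ev["user"]]) >= threshold:
            alerts.append((ev["user"], ev["ts"]))
    return alerts
```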
Posted 22 hours ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description
Job Title – ETL Testing – Python & SQL
Candidate Specification – 5+ years; open to the 1 PM to 10 PM shift. ETL (Python) – all 5 days WFO; ETL (SQL) – hybrid. Location – Chennai.
Job Description: Experience in ETL testing or data warehouse testing. Strong in SQL Server, MySQL, or Snowflake. Strong in scripting languages, particularly Python. Strong understanding of data warehousing concepts, ETL tools (e.g., Informatica, Talend, SSIS), and data modeling. Proficient in writing SQL queries for data validation and reconciliation. Experience with testing tools such as HP ALM, JIRA, TestRail, or similar. Excellent problem-solving skills and attention to detail.
Role: ETL Testing | Industry Type: IT/Computers - Software | Functional Area: ITES/BPO/Customer Service | Required Education: Bachelor Degree | Employment Type: Full Time, Permanent | Key Skills: ETL, Python, SQL | Job Code: GO/JC/185/2025 | Recruiter Name: Sheena Rakesh
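As a small illustration of the "SQL queries for data validation and reconciliation" requirement above, the sketch below runs count and aggregate checks between a staging table and a warehouse table. It uses an in-memory SQLite database purely so the example is self-contained; in the role the same queries would target SQL Server, MySQL, or Snowflake, and the table and column names are invented.

```python
# Minimal sketch of row-count / aggregate reconciliation an ETL tester automates.
# Uses in-memory SQLite so it runs anywhere; table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE staging_orders (id INTEGER, amount REAL);
    CREATE TABLE dw_orders      (id INTEGER, amount REAL);
    INSERT INTO staging_orders VALUES (1, 100.0), (2, 250.5), (3, 75.0);
    INSERT INTO dw_orders      VALUES (1, 100.0), (2, 250.5), (3, 75.0);
""")

def fetch_one(sql):
    cur.execute(sql)
    return cur.fetchone()[0]

# Row-count reconciliation between source (staging) and target (warehouse).
assert fetch_one("SELECT COUNT(*) FROM staging_orders") == \
       fetch_one("SELECT COUNT(*) FROM dw_orders"), "row counts differ"

# Aggregate (checksum-style) reconciliation on a key measure.
assert abs(fetch_one("SELECT SUM(amount) FROM staging_orders") -
           fetch_one("SELECT SUM(amount) FROM dw_orders")) < 1e-6, "amounts differ"

print("source and target are reconciled")
```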
Posted 22 hours ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Senior Data Scientist — Gen AI/ML Expert
Location: Hybrid — Gurugram
Company: Mechademy – Industrial Reliability & Predictive Analytics
About Mechademy: At Mechademy, we are redefining the future of reliability in rotating machinery with our flagship product, Turbomechanica. Built at the intersection of physics-based models, AI, and machine learning, Turbomechanica delivers prescriptive analytics that detect potential equipment issues before they escalate, maximizing uptime, extending asset life, and reducing operational risks for our industrial clients.
The Role: We are seeking a talented and driven Senior Data Scientist (AI/ML) with 3+ years of experience to join our AI team. You will play a critical role in building scalable ML pipelines, integrating cutting-edge language models, and developing autonomous agent-based systems that transform how predictive maintenance is done for industrial equipment. This is a highly technical and hands-on role, with a strong emphasis on real-world AI deployments — working directly with sensor data, time-series analytics, anomaly detection, distributed ML, and LLM-powered agentic workflows.
What Makes This Role Unique: Work on real-world industrial AI problems, combining physics-based models with modern ML/LLM systems. Collaborate with domain experts, engineers, and product leaders to directly impact critical industrial operations. Freedom to experiment with new tools, models, and techniques — with full ownership of your work. Help shape our technical roadmap as we scale our AI-first predictive analytics platform. Flexible hybrid work culture with high-impact visibility.
Key Responsibilities: Design & Develop ML Pipelines: Build scalable, production-grade ML pipelines for predictive maintenance, anomaly detection, and time-series analysis. Distributed Model Training: Leverage distributed computing frameworks (e.g., Ray, Dask, Spark, Horovod) for large-scale model training. LLM Integration & Optimization: Fine-tune, optimize, and deploy large language models (Llama, GPT, Mistral, Falcon, etc.) for applications like summarization, RAG (Retrieval-Augmented Generation), and knowledge extraction. Agent-Based AI Pipelines: Build intelligent multi-agent systems capable of reasoning, planning, and executing complex tasks via tool usage, memory, and coordination. End-to-End MLOps: Own the full ML lifecycle — from research and experimentation to deployment, monitoring, and production optimization. Algorithm Development: Research, evaluate, and implement state-of-the-art ML/DL/statistical algorithms for real-world sensor data. Collaborative Development: Work closely with cross-functional teams including software engineers, domain experts, product managers, and leadership.
Core Requirements: 3+ years of professional experience in AI/ML, data science, or applied ML engineering. Strong hands-on experience with modern LLMs (Llama, GPT series, Mistral, Falcon, etc.), fine-tuning, prompt engineering, and RAG techniques. Familiarity with frameworks like LangChain, LlamaIndex, or equivalent for LLM application development. Practical experience in agentic AI pipelines: tool use, sequential reasoning, and multi-agent orchestration. Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and at least one deep learning framework (TensorFlow, PyTorch, or JAX). Exposure to distributed ML frameworks (Ray, Dask, Horovod, Spark ML, etc.). Experience with containerization and orchestration (Docker, Kubernetes). Strong problem-solving ability, ownership mindset, and ability to work in fast-paced startup environments. Excellent written and verbal communication skills.
Bonus / Good to Have: Experience with time-series data, sensor data processing, and anomaly detection. Familiarity with CI/CD pipelines and MLOps best practices. Knowledge of cloud deployment, real-time system optimization, and industrial data security standards. Prior open-source contributions or active GitHub projects.
What We Offer: Opportunity to work on cutting-edge technology transforming industrial AI. Direct ownership, autonomy, and visibility into product impact. Flexible hybrid work culture. Professional development budget and continuous learning opportunities. Collaborative, fast-moving, and growth-oriented team culture. Health benefits and performance-linked rewards. Potential for equity participation for high-impact contributors.
Note: Title and compensation will be aligned with the candidate’s experience and potential impact.
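Since the posting centres on sensor-data anomaly detection for predictive maintenance, here is a minimal, hedged sketch using scikit-learn's IsolationForest on simulated vibration/temperature readings. The data, feature choices, and contamination rate are invented for illustration and are not Mechademy's actual Turbomechanica pipeline.

```python
# Hedged sketch: unsupervised anomaly detection on simulated sensor readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated (vibration, temperature) readings with a few injected fault points.
normal = rng.normal(loc=[2.0, 65.0], scale=[0.2, 1.5], size=(1000, 2))
faults = rng.normal(loc=[5.0, 80.0], scale=[0.5, 2.0], size=(10, 2))
readings = np.vstack([normal, faults])

model = IsolationForest(contamination=0.01, random_state=0).fit(readings)
labels = model.predict(readings)            # -1 = anomaly, 1 = normal
print("flagged readings:", int((labels == -1).sum()))
```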
Posted 22 hours ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Description
Job Title: Automation Tester – Selenium, Python, Databricks
Candidate Specification: 7+ years, immediate to 30 days.
Job Description: Experience with automated testing. Ability to code and read a programming language (Python). Experience in pytest and Selenium (Python). Experience working with large datasets and complex data environments. Experience with Airflow, Databricks, Data Lake, PySpark. Knowledge and working experience in Agile methodologies. Experience in CI/CD/CT methodology. Experience in test methodologies.
Role: Automation Tester | Industry Type: IT/Computers - Software | Required Education: B Tech | Employment Type: Full Time, Permanent | Key Skills: Selenium, Python, Databricks | Job Code: GO/JC/100/2025 | Recruiter Name: Sheena Rakesh
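As a concrete, hedged illustration of the pytest + Selenium skill set listed, below is a minimal test module. The URL, element locators, and credentials are placeholders, and a matching browser driver (e.g. chromedriver) is assumed to be on the PATH.

```python
# Minimal pytest + Selenium sketch (test_login.py); URL, locators, and credentials
# are placeholders and a compatible WebDriver must be available on the PATH.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv
    drv.quit()

def test_login_page_title(driver):
    driver.get("https://example.com/login")          # placeholder URL
    assert "Login" in driver.title

def test_invalid_login_shows_error(driver):
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("bad_user")
    driver.find_element(By.ID, "password").send_keys("bad_pass")
    driver.find_element(By.ID, "submit").click()
    error = driver.find_element(By.CSS_SELECTOR, ".error-message")
    assert "invalid" in error.text.lower()
```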
Posted 22 hours ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
The D. E. Shaw group is a global investment and technology development firm with more than $60 billion in investment capital as of March 1, 2024, and offices in North America, Europe, and Asia. Since our founding in 1988, our firm has earned an international reputation for successful investing based on innovation, careful risk management, and the quality and depth of our staff. We have a significant presence in the world’s capital markets, investing in a wide range of companies and financial instruments in both developed and developing economies. We are looking for an experienced engineer to join our GAITech team at our firm’s office in Hyderabad or Bengaluru. This role entails working as a part of a team that is focused on providing core AI infrastructure for the firm. The focus will be on improving areas including developer productivity, process efficiency, making DESCO data easy to use with LLMs, promoting generative AI for BU-specific adoption, etc. You will be expected to comprehend the technical requirements of diverse groups using AI, explore open-source tech options beneficial for us, and integrate the chosen tech within our teams to enhance efficiency and construct supportive systems. WHAT YOU'LL DO DAY-TO-DAY: In this position, you will work on overseeing the end-to-end development of generative AI tools and infrastructure. As an important part of the role, you will solve complex technical challenges, ensure project specifications are met efficiently, and facilitate the rapid learning and implementation of new technologies within the team. WHO WE’RE LOOKING FOR: Basic qualifications: A master’s or bachelor’s degree in computer science or a related technical field A minimum of 2 years of industry experience Experience in Python Exceptional problem-solving abilities and the capacity to acquire and apply new technologies quickly Excellent communication and people management skills Preferred qualifications: Experience or keen interest in the AI space Interested candidates can apply through our website: https://www.deshawindia.com/recruit/jobs/Adv/Link/SnrMemGAITechFeb25 We encourage candidates with relevant experience looking to restart their careers after a break to apply for this position. Learn about Recommence, our gender-neutral return-to-work initiative. The Firm offers excellent benefits, a casual, collegial working environment, and an attractive compensation package. For further information about our recruitment process, including how applicant data will be processed, please visit https://www.deshawindia.com/careers Members of the D. E. Shaw group do not discriminate in employment matters on the basis of sex, race, colour, caste, creed, religion, pregnancy, national origin, age, military service eligibility, veteran status, sexual orientation, marital status, disability, or any other protected class. Show more Show less
Posted 22 hours ago
10.0 - 19.0 years
40 - 65 Lacs
Hyderabad
Work from Office
As the Manager, Machine Learning Engineer you will be responsible for leading a team of talented ML engineers, guiding them in the design, development, and deployment of machine learning models. You will collaborate with cross-functional teams to ensure the successful integration of ML solutions into our products and services. Key Responsibilities: Lead and mentor a team of machine learning engineers, providing guidance and support in their professional development. Oversee the design and implementation of machine learning models using Python and relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn). Manage the deployment and scaling of ML models on AWS infrastructure, utilizing services like SageMaker, EC2, and Lambda. Implement Infrastructure as Code (IaC) using Terraform to automate provisioning and configuration of ML environments. Collaborate with data scientists, product managers, and other stakeholders to align ML solutions with business objectives. Ensure the reliability, scalability, and performance of deployed models through continuous monitoring and optimization. Stay informed about the latest advancements in machine learning and AI technologies, and drive innovation within the team. Document processes and methodologies to ensure reproducibility and knowledge sharing across the organization. Qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent experience. Proven experience as a Machine Learning Engineer, with a track record of leading teams and projects. Strong proficiency in Python and machine learning libraries (e.g., TensorFlow, PyTorch, scikit-learn). Extensive experience with AWS services for deploying and managing ML models. Expertise in Terraform for infrastructure automation. Solid understanding of machine learning algorithms and techniques. Excellent leadership, communication, and collaboration skills. Preferred Qualifications: Master's degree or Ph.D. in a related field. Experience with other cloud platforms (e.g., Azure, Google Cloud). Knowledge of big data technologies (e.g., Hadoop, Spark). Familiarity with deep learning architectures and frameworks. Experience with version control systems like Git.
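For readers unfamiliar with the SageMaker deployment workflow mentioned above, here is a hedged sketch of a "script mode" training entry point: SageMaker passes hyperparameters as CLI arguments and injects SM_MODEL_DIR and SM_CHANNEL_TRAIN environment variables. The CSV file name and label column are assumptions, and this is an illustration rather than the team's actual pipeline.

```python
# Sketch of a SageMaker "script mode" training entry point (train.py).
# SM_MODEL_DIR / SM_CHANNEL_TRAIN are the standard injected environment variables;
# the data file name and label column are assumed for the example.
import argparse, os
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--n-estimators", type=int, default=100)
    parser.add_argument("--model-dir", default=os.environ.get("SM_MODEL_DIR", "."))
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN", "."))
    args = parser.parse_args()

    df = pd.read_csv(os.path.join(args.train, "train.csv"))    # assumed file name
    X, y = df.drop(columns=["label"]), df["label"]             # assumed label column

    model = RandomForestClassifier(n_estimators=args.n_estimators).fit(X, y)
    joblib.dump(model, os.path.join(args.model_dir, "model.joblib"))

if __name__ == "__main__":
    main()
```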
Posted 22 hours ago
12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description
Senior Lead (C++) – Bangalore
At least 12 years of experience building enterprise-grade distributed systems based on C++. Good understanding of modern C++ standards and STL is a must. Ability to compare and appreciate the pros, cons, and fit-for-purpose between different technologies (e.g., Java vs C++ vs Python, SQL vs NoSQL). Expertise in algorithms and data structures with strong computer science fundamentals. Experience in relational databases (SQL / Oracle / MySQL) is preferred. Knowledge of modern SDLC practices, Agile methodologies, tools such as Jira, software configuration tools such as GitHub, and familiarity with CI processes.
Other: Strong team player with a collaborative mindset. Ability to maintain a proactive and positive attitude in a fast-paced, changing environment. Thrives in a multi-cultural, global organization. Open-minded, able to adapt to working in a multi-cultural team atmosphere. Flexible to adapt to changing project needs driven by the customers. Ability to think out of the box and develop tools to enhance productivity.
Education: University degree in computer science or a related field, or relevant experience.
Role: Senior Lead - C++ | Industry Type: IT/Computers - Software | Functional Area: IT-Software | Required Education: Degree | Employment Type: Full Time, Permanent | Key Skills: C++, STL, SQL / Oracle / MySQL | Job Code: GO/JC/183/2025 | Recruiter Name: Sheena Rakesh
Posted 22 hours ago
6.0 - 11.0 years
20 - 32 Lacs
Pune, Gurugram
Hybrid
Key Responsibilities: Design and develop ETL/ELT pipelines using Azure Data Factory , Snowflake , and DBT . Build and maintain data integration workflows from various data sources to Snowflake. Write efficient and optimized SQL queries for data extraction and transformation. Work with stakeholders to understand business requirements and translate them into technical solutions. Monitor, troubleshoot, and optimize data pipelines for performance and reliability. Maintain and enforce data quality, governance, and documentation standards. Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment. Must-Have Skills: Strong experience with Azure Cloud Platform services. Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines. Proficiency in SQL for data analysis and transformation. Hands-on experience with Snowflake and SnowSQL for data warehousing. Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse. Experience working in cloud-based data environments with large-scale datasets. Good-to-Have Skills: Experience with Azure Data Lake , Azure Synapse , or Azure Functions . Familiarity with Python or PySpark for custom data transformations. Understanding of CI/CD pipelines and DevOps for data workflows. Exposure to data governance , metadata management , or data catalog tools. Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus.
Posted 22 hours ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Elevate your career journey by embracing a new challenge with Kinaxis. We are experts in tech, but it’s really our people who give us passion to always seek ways to do things better. As such, we’re serious about your career growth and professional development, because People matter at Kinaxis. In 1984, we started out as a team of three engineers based in Ottawa, Canada. Today, we have grown to become a global organization with over 2000 employees around the world, and support 40,000+ users in over 100 countries. As a global leader in end-to-end supply chain management, we enable supply chain excellence for all industries. We are expanding our team in Chennai and around the world as we continue to innovate and revolutionize how we support our customers. Our journey in India began in 2020 and we have been growing steadily since then! Building a high-trust and high-performance culture is important to us and we are proud to be Great Place to Work® CertifiedTM. Our state-of-the-art office, located in the World Trade Centre in Chennai, offers our growing team space for expansion and collaboration. Location India, Remote. As a member of our Consulting Team, you understand our customers’ most pressing business performance challenges and you are committed to helping our customers solve complex challenges in the distributed value chain that is prevalent in manufacturing today. What you will do Participate in deep-dive customer business requirements discovery sessions and develop requirements specifications documentation. Support Solution Architect in providing creative solutions to complex business problems while maintaining best practices. Guide and mentor junior consultants on the project team during project. Learn Maestro software and perform solution configuration. Perform training of customer end users on the configured solution. Understand supply chain industry trends and benchmark customer against the same. Ensure the customer is obtaining the business benefits as captured in the business case. Support the validation and testing of the solution and capture user feedback. Support data management and data integration related activities. Any other reasonable project related tasks as assigned by the Project Manager. Technologies we use Excellent problem solving and critical thinking skills. Technical skills such as SQL, R, Java Script, Python, etc. Experience with manufacturing planning solutions such as Kinaxis, SAP, JDA, etc What we are looking for A passion for working in customer facing roles and you have great interpersonal, communication, facilitation and presentation skills. 8 -12 years of relevant experience in manufacturing, production planning, demand management industry role and business software consulting role. BS/MS/PhD in Industrial Engineering, Supply Chain, Operations Research, Computer Science, Statistics or a related field with an excellent academic record. Good background in Supply Chain engineering concepts and understanding of statistical forecasting, inventory management, MRP, scheduling, etc. Ability to learn a new application – Maestro. Self-direction with ability to excel in a fast paced work environment. Work well in a team environment and have the ability to work effectively with people at all levels in an organization. Open to travel 75% on average and 100% occasionally and also can work effectively when working remotely from the client. Ability to communicate complex ideas effectively in English, both verbally and in writing. 
#Intermediate #Senior Work With Impact: Our platform directly helps companies power the world’s supply chains. We see the results of what we do out in the world every day—when we see store shelves stocked, when medications are available for our loved ones, and so much more. Work with Fortune 500 Brands: Companies across industries trust us to help them take control of their integrated business planning and digital supply chain. Some of our customers include Ford, Unilever, Yamaha, P&G, Lockheed-Martin, and more. Social Responsibility at Kinaxis: Our Diversity, Equity, and Inclusion Committee weighs in on hiring practices, talent assessment training materials, and mandatory training on unconscious bias and inclusion fundamentals. Sustainability is key to what we do and we’re committed to net-zero operations strategy for the long term. We are involved in our communities and support causes where we can make the most impact. People matter at Kinaxis and these are some of the perks and benefits we created for our team: Flexible vacation and Kinaxis Days (company-wide day off on the last Friday of every month) Flexible work options Physical and mental well-being programs Regularly scheduled virtual fitness classes Mentorship programs and training and career development Recognition programs and referral rewards Hackathons For more information, visit the Kinaxis web site at www.kinaxis.com or the company’s blog at http://blog.kinaxis.com . Kinaxis strongly encourages diverse candidates to apply to our welcoming community. We strive to make our website and application process accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at recruitmentprograms@kinaxis.com . This contact information is for accessibility requests only and cannot be used to inquire about the status of applications. Kinaxis is committed to ensuring a fair and transparent recruitment process. We use artificial intelligence (AI) tools in the initial step of the recruitment process to compare submitted resumes against the job description, to identify candidates whose education, experience and skills most closely match the requirements of the role. After the initial screening, all subsequent decisions regarding your application, including final selection, are made by our human recruitment team. AI does not make any final hiring decisions Show more Show less
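Purely as a back-of-envelope illustration of the inventory-management concepts this consulting role draws on (not a Kinaxis or Maestro calculation), the sketch below computes a reorder point with safety stock under normally distributed daily demand; all figures are invented.

```python
# Illustrative supply-chain arithmetic: reorder point with safety stock.
# Demand and lead-time figures are made up for the example.
import math

avg_daily_demand = 120          # units/day
demand_std_dev   = 30           # units/day
lead_time_days   = 7
z_95             = 1.65         # ~95% cycle service level

safety_stock  = z_95 * demand_std_dev * math.sqrt(lead_time_days)
reorder_point = avg_daily_demand * lead_time_days + safety_stock

print(f"safety stock  = {safety_stock:.0f} units")
print(f"reorder point = {reorder_point:.0f} units")
```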
Posted 22 hours ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company: Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a listing on NASDAQ. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing, with a business focus on digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. It has major delivery centers in India, including Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub.
Job Title: Python ETL Tester
Location: Chennai, Coimbatore, Bangalore, Hyderabad, Pune
Experience: 6+ years
Job Type: Contract to hire
Notice Period: Immediate joiners
Mandatory Skills: Experience in Python automation, SQL, and ETL testing.
Job Description: 6 to 9 years of experience in Python automation, SQL, and ETL testing. Experience in test automation, defect reporting, and tracking.
Posted 22 hours ago
5.0 years
0 Lacs
In, Tandjilé, Chad
On-site
Job Description
Job Title – Azure Data Engineer
Candidate Specification – 5+ years, Notice Period – Immediate to 30 days, Hybrid.
Job Description: Strong in Azure Data Factory (ADF) and Azure Databricks. Experience in Azure Synapse Analytics and Azure Data Lake Storage (Gen2). Database experience with Azure SQL Database / SQL Server. Proficiency in writing complex SQL queries and working with large datasets. Experience with Python, Scala, and PySpark for data transformations. Knowledge of DevOps practices and tools (e.g., Azure DevOps, CI/CD for data pipelines).
Role: Azure Data Engineer | Industry Type: IT/Computers - Software | Required Education: Bachelor Degree | Employment Type: Full Time, Permanent | Key Skills: Azure Data Factory, Azure Databricks, Python | Job Code: GO/JC/186/2025
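To illustrate the "PySpark for data transformations" requirement, here is a minimal, assumption-laden sketch of the kind of Databricks transformation an ADF pipeline might orchestrate. The storage paths, container names, and columns are placeholders.

```python
# Hedged PySpark sketch; paths, containers, and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

orders = (spark.read.option("header", True)
          .csv("abfss://raw@<storage-account>.dfs.core.windows.net/orders/"))  # placeholder path

cleaned = (orders
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0)
           .withColumn("order_date", F.to_date("order_date")))

daily = cleaned.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

daily.write.mode("overwrite").parquet(
    "abfss://curated@<storage-account>.dfs.core.windows.net/daily_orders/")
```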
Posted 22 hours ago
3.0 - 5.0 years
10 - 15 Lacs
Hyderabad
Work from Office
Primary skills: 1. Proficiency in programming languages such as Python, PHP, etc. 2. Proficiency in front-end technologies such as CSS, HTML, and JavaScript. 3. Exposure to various operating systems such as Linux and Windows. 4. Good knowledge of front-end frameworks such as Bootstrap, Vue.JS, AngularJS, ReactJS, etc. 5. Expertise in back-end frameworks such as Express, NodeJS, Django, etc. 6. Familiarity with version control systems (e.g., Git) and CI/CD pipelines. 7. Proficiency in troubleshooting and solving complex issues. 8. Familiar with MySQL, PostgreSQL, and SQL databases. 9. Excellent written and verbal communication skills. 10. Ability to work independently and as part of a team. Secondary skills: 1. Knowledge of REST APIs and microservices architecture. 2. Familiar with agile project management concepts. 3. Excellent analytical and problem-solving skills. 4. Time management and collaboration skills.
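Given the REST API and back-end framework skills listed above, here is a small, self-contained sketch of a REST resource in FastAPI (one of several frameworks the posting would accept; Express or Django would look different). The Task model is invented for illustration.

```python
# Minimal REST API sketch with FastAPI; run with: uvicorn main:app --reload
from typing import Dict
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Task(BaseModel):
    id: int
    title: str
    done: bool = False

tasks: Dict[int, Task] = {}   # in-memory store, for illustration only

@app.post("/tasks", status_code=201)
def create_task(task: Task) -> Task:
    if task.id in tasks:
        raise HTTPException(status_code=409, detail="task already exists")
    tasks[task.id] = task
    return task

@app.get("/tasks/{task_id}")
def get_task(task_id: int) -> Task:
    if task_id not in tasks:
        raise HTTPException(status_code=404, detail="task not found")
    return tasks[task_id]
```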
Posted 22 hours ago
5.0 years
0 Lacs
Gurugram, Haryana, India
Remote
We are seeking a talented individual to join our Transformation team at Mercer. This role will be based in Gurgaon. This is a hybrid role that requires working at least three days a week in the office.
Lead Specialist - Metrics, Analytics & Reporting
We will count on you to: Develop and maintain reports, dashboards, and scorecards that track key business metrics for contact centres. Gather data from multiple sources, ensuring accuracy and consistency in reporting. Analyse data to identify trends, patterns, and insights that can drive business decisions. Work closely with business units to understand their reporting needs and deliver tailored solutions. Continuously evaluate and improve reporting processes and tools for efficiency and effectiveness. Provide training and support to users on how to access and interpret reports. Collaborate with IT and data teams to ensure data quality and the integration of reporting systems. Manage ad-hoc reporting requests and deliver timely responses to business inquiries.
What you need to have: 5+ years of experience in IT support services, with significant reporting experience in managing large-scale data and reporting requirements. Proven experience in a reporting or data analysis role. Experience in designing, developing, and deploying rich graphic visualizations with drill-down and drill-up options using Power BI. Experience in creating Power BI reports using multiple sources. Experience deploying dashboards to the Power BI service (cloud-based business analytics service). Experience in performance tuning of SQL Server queries and stored procedures. Experience with scheduled automatic refresh in the Power BI service, along with using the Power BI gateway. Experience developing analysis reports and visualizations using DAX functions such as table, aggregation, and iteration functions. Ability to deliver advanced/complex reporting solutions such as dashboards and standardized reports using Power BI Desktop. Strong end-to-end experience in designing and deploying data visualizations using Power BI. Experience in using Python and R scripts in Power BI dashboards. Experience with advanced reporting and dashboards in Power BI. Strong proficiency in MS SQL Server and prior experience in MS SQL Server performance tuning. Advanced knowledge of T-SQL, including transactions, error handling, CTEs, ROW_NUMBER/OVER, and hierarchical data sets. Excellent understanding of indexes, locks, execution plans, and file stats. Conveys designs to software development teams via discussion, documentation, and prototype code. Ability to translate complex data into actionable insights for non-technical audiences.
Good knowledge of IT Services functions and their responsibilities and strong analytical and problem-solving ability The ability to work and team effectively with business, management personnel, and diverse and geographically dispersed teams Basic reporting skills Possess innovative mindset by being open to new ideas and works comfortably with global teams What makes you stand out: Excellent English language skills (verbal and written), Excellent communication, collaboration and basic project management skills Good presentation skills with ability to present material clearly and concisely Excellent awareness of different cultures and working practices across the regions Proven experience in working in, and basic management of, diverse and geographically dispersed teams Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being. Mercer believes in building brighter futures by redefining the world of work, reshaping retirement and investment outcomes, and unlocking real health and well-being. Mercer’s approximately 25,000 employees are based in 43 countries and the firm operates in over 130 countries. Mercer is a business of Marsh McLennan (NYSE: MMC), the world’s leading professional services firm in the areas of risk, strategy and people, with 85,000 colleagues and annual revenue of over $20 billion. Through its market-leading businesses including Marsh, Guy Carpenter and Oliver Wyman, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment. For more information, visit mercer.com. Follow Mercer on LinkedIn and Twitter. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person. Show more Show less
Posted 22 hours ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Description: We are looking for a motivated and experienced Project Delivery Manager responsible for ensuring that our delivery squad delivers an exceptional customer experience that will result in increased business engagement and customer satisfaction.
Responsibilities:
- Leading the agile delivery team, managing conflict, and ensuring the team's processes and tasks are carried out efficiently.
- Identifying customer needs and overseeing service delivery within the business context.
- Determining ways to reduce costs without compromising customer satisfaction.
- Manages area(s), customer(s), or team(s) of company employees with well-defined, limited scope, including directing daily work activities/priorities, people recruitment and development, cost management, and direction-setting within the area of responsibility.
- Coaches and mentors employee direct reports. May coach and mentor non-direct reports as needed.
- Manages team and individual performance.
- Monitor and ensure organizational goals and contractual commitments are met (e.g., budget/cost, service availability, responses, reports).
- Advise management and peers on matters of importance to area(s) of responsibility. Propose/influence direction-setting.
- Resolve/monitor customer escalations as appropriate.
- Establish and manage relationships with customer subject matter experts and appropriate customer management, with an objective of maintaining and building the business.
Mandatory Skills Description:
- 8+ years of proven experience delivering complex fixed-bid projects.
- Experience in Trade Surveillance / Financial Crime is a must.
- Must come from a techno-functional background with Python/PySpark knowledge.
- Holistic knowledge of business processes and various scenarios, challenges, and opportunities.
- Ability to carve out business challenges and requirements and envisage a comprehensive solution.
- Ability to discuss and understand the project, purpose, goal, team, timelines, challenges, requirements, potential risks and their mitigation, plan, communication mode, and high-level customer expectations.
- Good communication and English language competency.
- Experience with managing project schedules, finances, risks, and issues.
- Close attention to detail and ability to sum up key messages for stakeholders.
- Provide weekly project updates, review completed tasks, and plan next action points.
- Understanding of project governance and Agile standards and procedures.
- Ability and willingness to be flexible, adapting to the demands of the customers.
- Technical knowledge to understand the content of the products delivered.
Posted 22 hours ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services (GBS) delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview* Global Markets Business Finance & Control (BF&C) is a division of Global Banking & Markets (GBAM) Finance responsible for the production and independent validation of Global Markets' profit and loss and balance sheet. In this regard, BF&C will ensure, inter alia, that accounting policies are correctly and consistently applied, and that trading portfolios are appropriately valued. The team prepares and reports P&L and balance sheet to the business and ensures the accuracy and integrity of the general ledger. We are responsible for daily service delivery and ensuring effective controls, transparent management information and becoming a center of excellence delivering process simplification and efficiencies. We work closely with front office, middle office, traders and valuation control teams to drive the control agenda across the business Job Description* Global Markets Business Finance & Control (BF&C) seeks to provide a support service with particular focus on the relationship between trading risk positions and P&L components. We are looking to recruit an individual to work in the P&L production area covering the Global Rates Trading desks. The candidate’s main responsibility will be to produce and deliver the daily P&L to front office, providing a high standard of analysis and explanation around any issues faced and daily revenue drivers. 
The candidate will therefore have considerable interaction with the respective Front Office, Middle Office, Finance, and Trade Capture teams, so good communication and time management skills are an essential requirement for this job. Primary products covered will include fixed income, money markets, FX, and interest rate derivatives.
Responsibilities* Production and reporting of daily P&L to Front Office and Senior Management. Reconcile actual P&L with trader estimates and provide flash/actual variance analysis. Work closely with the trading desks on position, P&L, or other issues on an ad-hoc basis. Front-to-back analysis and reconciliation of front office P&L and balance sheet to firm sub-ledgers. Assist with execution of month-end controls, ensuring management vs. financial P&L variances are within thresholds. Analyze traders’ risk positions and understand and apply the Greeks (Delta, Vega, Gamma) against daily market moves. Would typically own a set of books / cost centers and Business Units. Liaise with various business partners such as Technology, Market Risk, Credit Risk, Operations, and Finance to resolve issues/queries. Development and continuous improvement of existing processes and workflow. Testing/UAT for systems work ranging from minor system releases to major system implementations. Remediation of issues in an autonomous yet timely manner, considering the principles of control and the need to mitigate operational risk.
Requirements*
Education* Qualified Chartered Accountant/CPA/CFA/MBA from a Tier I/II institute with relevant experience in a Product Control or Global Markets environment and an organization of similar scale, with US GAAP, IFRS, and IAS reporting frameworks, and with an interest and aptitude for derivative products.
Certifications (if any): CFA / FRM certified candidates would be preferred. Advanced education and/or enhanced technical qualifications are a plus.
Experience Range* 4 to 8 years, with at least 2+ years’ experience in a Global Markets Product Control role.
Foundational skills* Proficiency in MS Office Suite; expert knowledge of Excel, Word, PowerPoint. Knowledge of Visual Basic, Access databases, and macros will be an added advantage. The right individual will have strong people skills and can multi-task to manage the challenges of Finance processes, yet have the awareness to escalate potential issues to their supervisor in a timely manner. The candidate must have a proven track record of communicating effectively with personnel from various areas within an organization and at different management levels. Must be proactive and a highly motivated self-starter; reactive and/or passive individuals need not apply.
Desired Skills: Alteryx / Python / Tableau knowledge would be an added advantage. Effective communication skills with English proficiency. Demonstrated ability to work in a high-pressure environment. Takes initiative and challenges existing processes and procedures in a proactive manner. Strong team player. Ability to analyze issues independently and derive solutions. Analytical skills. Inherent sense of principles of control through experience and sound judgment. Reliability.
Work Timings* 12.30 IST to 21.30 IST
Job Location* Gurugram/Hyderabad
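Because the role asks the controller to apply the Greeks against daily market moves when explaining P&L, here is a small illustrative calculation of a Greeks-based estimate versus reported P&L. All position sensitivities, market moves, and P&L figures are invented for the example.

```python
# Illustrative flash-vs-actual style check: estimate P&L from the desk's Greeks
# and the day's market moves, then compare to the reported figure. Numbers are made up.
delta = 15_000        # P&L per 1-point move in the underlying
gamma = 400           # change in delta per 1-point move
vega  = 9_000         # P&L per 1 vol-point move

spot_move = 1.8       # points
vol_move  = -0.4      # vol points

greeks_estimate = delta * spot_move + 0.5 * gamma * spot_move**2 + vega * vol_move
actual_pnl      = 26_350   # hypothetical figure from the sub-ledger

variance = actual_pnl - greeks_estimate
print(f"Greeks-based estimate: {greeks_estimate:,.0f}")
print(f"Actual P&L:            {actual_pnl:,.0f}")
print(f"Unexplained variance:  {variance:,.0f}")
```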
Posted 22 hours ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Job Title – Senior Data Scientist
Candidate Specification – 10+ years, Notice Period – Immediate to 30 days, Hybrid.
Job Summary: We are seeking a highly skilled and experienced Senior Data Scientist to join our advanced analytics team. The ideal candidate will possess strong statistical and machine learning expertise, hands-on programming skills, and the ability to transform data into actionable business insights. This role also requires domain understanding to align data science efforts with business objectives in industries such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Equipment.
Primary Responsibilities: Lead the design, development, and deployment of advanced machine learning and statistical models. Analyze large, complex datasets to uncover trends, patterns, and actionable insights. Collaborate cross-functionally with business, engineering, and domain teams to define analytical problems and deliver impactful solutions. Apply deep understanding of business objectives to drive the application of data science in decision-making. Ensure the quality, integrity, and governance of data used for modeling and analytics. Guide junior data scientists and review code and models for scalability and accuracy.
Core Competencies (Primary Skills):
Statistical Analysis & Mathematics – Strong foundation in probability, statistics, linear algebra, and calculus. Experience with hypothesis testing, A/B testing, and regression models.
Machine Learning & Deep Learning – Proficient in supervised/unsupervised learning and ensemble techniques. Hands-on experience with neural networks, NLP, and computer vision.
Business Acumen & Domain Knowledge – Proven ability to translate business needs into data science solutions. Exposure to domains such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Pumps/Motors.
Technical Proficiency – Programming languages: Python, R, SQL. Libraries & tools: Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch. Data visualization: Matplotlib, Seaborn, Plotly, Tableau, Power BI. MLOps & deployment: Docker, Kubernetes, MLflow, Airflow. Cloud & big data (preferred): AWS, GCP, Azure, Spark, Hadoop, Hive, Presto.
Secondary Skills (Preferred): Generative AI: GPT-based models, fine-tuning, open-source LLMs, agentic AI frameworks. Project management: Agile methodologies, sprint planning, stakeholder communication.
Role: Senior Data Scientist - Contract Hiring | Industry Type: IT/Computers - Software | Required Education: Bachelor Degree | Employment Type: Full Time, Permanent | Key Skills: Deep Learning, Machine Learning, Python, Statistical Analysis | Job Code: GO/JC/375/2025 | Recruiter Name: Christopher
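As a toy illustration of the hypothesis-testing / A/B-testing competency listed, the sketch below runs Welch's two-sample t-test on simulated metric data with SciPy; the data and the 5% threshold are arbitrary choices for the example.

```python
# Small sketch of a two-sample (Welch's) t-test on simulated metric data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control   = rng.normal(loc=10.0, scale=2.0, size=500)   # baseline metric
treatment = rng.normal(loc=10.4, scale=2.0, size=500)   # variant metric

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
```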
Posted 22 hours ago
8.0 - 13.0 years
9 - 19 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Location : Chennai, Hyderabad, Bangalore, Kolkata, Pune Must Have : Performance Engineering + SAP Job Description: 6+ years of experience in performance engineering + SAP. Strong understanding of performance testing methodologies and tools. Experience with performance testing tools (e.g., JMeter, LoadRunner, Gatling). Experience with performance monitoring tools (e.g., AppDynamics, New Relic, Splunk, Dynatrace). Collaborate with development teams to identify and resolve performance issues. Expertise with AWR report analysis, heap dump, thread dump, JVM tuning Analyse application code and infrastructure to identify areas for optimization. Implement performance tuning techniques and best practices. Good to have: SAP performance testing and engineering experience Certifications in performance engineering or related fields. Proficiency in scripting languages (e.g., Python, Java, JavaScript). Knowledge of cloud technologies (AWS, Azure, GCP)
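The dedicated tools for this role are JMeter, LoadRunner, or Gatling, but as a rough, hedged illustration of what a performance check measures, here is a small Python script that fires concurrent requests at a placeholder endpoint and reports the 95th-percentile latency.

```python
# Rough load-test sketch: concurrent requests and p95 latency. URL is a placeholder.
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles
import requests

URL = "https://example.com/health"   # placeholder endpoint

def timed_call(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(timed_call, range(200)))

p95 = quantiles(latencies, n=100)[94]   # 95th percentile cut point
print(f"requests: {len(latencies)}, p95 latency: {p95 * 1000:.0f} ms")
```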
Posted 22 hours ago
6.0 years
60 - 65 Lacs
Surat, Gujarat, India
Remote
Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MAM, App integration Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. 
Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
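Since the role centres on adding vector-based semantic search to the asset metadata layer, here is a deliberately simplified sketch of the core idea: brute-force cosine similarity over normalised embeddings. A production build would use Weaviate, Pinecone, Qdrant, or Faiss with real embedding vectors; the asset IDs and random vectors below are stand-ins.

```python
# Toy vector-search sketch: brute-force cosine similarity over an in-memory index.
import numpy as np

rng = np.random.default_rng(0)
asset_ids  = [f"asset-{i}" for i in range(1000)]
embeddings = rng.normal(size=(1000, 384)).astype("float32")   # stand-in embeddings

# Normalise once so cosine similarity reduces to a dot product.
index = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

def search(query_vec: np.ndarray, top_k: int = 5):
    q = query_vec / np.linalg.norm(query_vec)
    scores = index @ q
    best = np.argsort(scores)[::-1][:top_k]
    return [(asset_ids[i], float(scores[i])) for i in best]

print(search(rng.normal(size=384)))
```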
Posted 22 hours ago
10.0 - 15.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Title: Senior Solution Architect
Location: Bengaluru (WFO)
Architectural Leadership: Serve as the Azure Solution Architect with specific sector knowledge, setting the direction for cloud architecture and ensuring alignment with the organization's technical strategy and O&G industry standards. Uphold industry best practices and standards specific to the O&G sector.
Technology Roadmap: Construct and continuously update an Azure-focused technology roadmap, aligning with the organization's long-term goals. Explore and identify cutting-edge Azure services and features that can propel technological advancement. Strategically plan and implement upgrades to bolster the organization's competitive position and enhance the scalability of Azure-based solutions.
Solution Design: Take the lead in designing and architecting complex Azure solutions, with a strong focus on scalability, robust security, cost-effectiveness, and alignment with the nuanced demands of the O&G industry.
Stakeholder Engagement: Work in tandem with various service lines, such as engineering divisions and business stakeholders, to align Azure architectural strategies with the core business objectives and ensure the designs are in sync with the business's forward direction. Possess the ability to effectively communicate Azure technical strategies to non-technical stakeholders, thereby facilitating their participation in informed decision-making.
Mentorship and Guidance: Offer Azure technical leadership and mentorship to solution squads. Cultivate an environment of innovation, continuous improvement, and technical prowess across the organization.
Compliance and Best Practices: Guarantee Azure solutions meet regulatory demands and O&G-specific standards, including those related to safety, environment, and operations.
Risk Assessment: Proactively identify and assess technical risks linked with Azure infrastructure and applications. Collaborate with multifaceted teams to formulate and implement measures to alleviate the detected risks. As a Solution Architect, it is crucial to pinpoint potential risks during the solution development phase and devise comprehensive risk mitigation plans for all solutions crafted.
Industry Expertise: Stay informed about emerging technologies, trends, and standards in the oil and gas industry. Evaluate the potential impact of new technologies and provide recommendations for adoption, both in upcoming solution designs and in enhancements of existing solution architectures.
Vendor Management: Engage with external vendors and technology associates to scrutinize third-party offerings compatible with Azure. Integrate these third-party solutions effortlessly, ensuring they complement and reinforce the broader Azure architectural strategy and the business objectives.
Master's or Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent experience). Overall, 8+ years of experience is required for the Solution Architect role: 8+ years of prior experience as a solution architect, preferably in the oil and gas industry, or 8+ years of prior experience as a software engineer or in a similar role. Extensive experience in the oil and gas sector, including knowledge of industry-specific challenges and opportunities.
Technical Expertise: Should have strong experience and an in-depth understanding of Azure services and architecture, including IaaS, PaaS, and SaaS solutions.
They should be adept at designing and implementing complex cloud infrastructure, ensuring scalability, reliability, and security. Also, strong experience in multi-cloud environments like AWS, GCP and their application migrations and management in O&G specifically. Advanced Problem-Solving: They must possess strong analytical skills to troubleshoot and resolve high-level technical issues. This includes the ability to perform root cause analysis and implement long-term solutions to prevent recurrence. Strategic Planning: The architect should be capable of developing strategic plans for cloud adoption and migration, aligning with the organization's goals. They should be able to evaluate and recommend new technologies and approaches to drive continuous improvement. Communication Skills: Excellent communication skills are essential for translating technical concepts to non-technical stakeholders, facilitating clear and effective discussions between cross-functional teams, and presenting proposals and progress reports to senior management. Client Engagement: Capable to work closely with clients (internal clients) to understand their business requirements and constraints, ensuring that the cloud solutions designed meet their needs and expectations. Innovation: With extensive experience, they should be at the forefront of innovation, exploring new cloud technologies and methodologies, and integrating them into the organization's practices to gain competitive advantage. Leadership and Mentorship: As a seasoned leader, the Senior Solution Architect should set the technical direction and make pivotal decisions that define the organization's cloud strategy, while also serving as a mentor to uplift junior architects and engineers. They must lead by example, inspiring teams through complex initiatives and fostering professional growth by imparting knowledge, best practices, and constructive feedback to nurture the next generation of technical experts. Must have Master’s or Bachelor’s degree in computer science engineering or information technology or Relevant field. Relevant certifications such as Microsoft Certified: Azure Solutions Architect Expert or similar. Microsoft AZ900 Certification & AZ 305 Certification TOGAF or ArchiMate or Zachman or equivalent architecture frameworks experience Experience in automation using Python, Gen AI, AI Ops, etc. Experience with data integration, data warehousing, and big data technologies. Experience with containerization and orchestration tools (e.g., any 2 of following: Docker, OpenShift, Kubernetes, ECS, GKE, AKS, EKS, Rancher, Apache Mesos, Nomad, Docker Swarm, Kubernetes). Understanding of the O&G sector's operational workflows, including the intricacies of exploration, extraction, refining, and distribution activities, to tailor cloud-based solutions that complement the industry's unique needs. Competence in tackling technical hurdles specific to the O&G domain, such as efficient asset management in isolated areas, processing extensive seismic datasets, and ensuring compliance with strict regulatory frameworks. Proficiency in leveraging Azure cloud technologies to enhance the O&G Industry's operational effectiveness, utilizing tools like IoT, advanced data analytics, and machine learning for better results Experience with CI/CD pipelines and automated testing frameworks (e.g. CircleCI, Jenkins, TeamCity, Travis CI, Bamboo, Bitbucket, etc.) Strong interpersonal skills with the ability to engage effectively with both technical and non-technical stakeholders. 
Posted 22 hours ago
4.0 - 9.0 years
15 - 30 Lacs
Hyderabad
Work from Office
As a Core ML Engineer, you will be responsible for designing, developing, and deploying machine learning models that solve complex problems and enhance our product offerings. You will collaborate with cross-functional teams to integrate ML solutions into our systems and ensure their scalability and performance. Key Responsibilities: Design and implement machine learning models using Python and relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn). Deploy and manage ML models on AWS infrastructure, utilizing services like SageMaker, EC2, and Lambda. Collaborate with data scientists and engineers to preprocess data and optimize model performance. Develop and maintain automated pipelines for model training, evaluation, and deployment. Monitor and evaluate model performance, making necessary adjustments to improve accuracy and efficiency. Stay up-to-date with the latest advancements in machine learning and AI technologies. Document processes and methodologies to ensure reproducibility and knowledge sharing. Qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent experience. Proven experience as a Machine Learning Engineer or in a similar role. Strong proficiency in Python and machine learning libraries (e.g., TensorFlow, PyTorch, scikit-learn). Experience with AWS services for deploying and managing ML models. Solid understanding of machine learning algorithms and techniques. Familiarity with data preprocessing and feature engineering. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.
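As an illustration of the training and evaluation work described above, here is a minimal scikit-learn sketch. The dataset is a stand-in and the saved artifact is only a placeholder for what would later be packaged for a SageMaker or Lambda deployment; it is not a prescribed pipeline.

```python
from sklearn.datasets import load_iris  # stand-in dataset for illustration only
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import joblib

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split before any deployment step
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the model; this artifact could then be packaged for SageMaker or another serving layer
joblib.dump(model, "model.joblib")
```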
Posted 22 hours ago
5.0 years
0 Lacs
Fatepura, Gujarat, India
On-site
Description Position at Wind River Wind River is seeking an experienced test framework developer to join the eLxr development teams. The successful candidate will be responsible for the development, implementation, and certification of the test environment setup for safety-critical featured products used by our Telecom, Aerospace, Industrial, and Automotive customers. As a team member, you will be involved with all aspects of the test framework life cycle, from requirements development to implementation to verification. You will work closely with the product management team and system architects to understand and implement the requested features. The test support team is responsible for developing the test environment and associated tests, improving the software ecosystem around eLxr, a Debian derivative, on platforms like Arm and IA for the next generation of applications. Key Responsibilities Help develop test strategies and cases for eLxr, a Debian derivative, helping Wind River grow its role in new embedded and enterprise market segments. Take the initiative to improve features and processes. Contribute ideas for product improvements and iterations. Collaborate effectively with global software engineering teams. About You Core Competencies & Demonstrated Success At least 5+ years of experience in the development of test infrastructure. Driver and board-level system software test development and integration. Developing test frameworks or test cases using C, Python, Go, or other languages. Experience in testing and automation in agile and continuous delivery models. Ability to develop test cases based on high-level requirements, low-level requirements, and test strategies, with knowledge of input test variations. Experience in using LAVA, Git, Jira, and the Linux environment. Qualifications BE/BTech or ME/MTech degree in Computer Science, Electronics Engineering, or equivalent. 5+ years of software development, verification & validation experience. Strong in C, Python (Design Patterns), Go, and Debian fundamentals. Experience in testing and automation in agile and continuous delivery models. Excellent English communication skills, both written and verbal. Excellent analytical and debugging skills. Good experience in developing test automation frameworks / test managers. Experience in working with emulators. Familiarity with CI/CD pipelines. Successfully delivered to NA & EMEA customers. Show more Show less
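To give a flavour of the test-case development mentioned above, here is a minimal pytest sketch. The command under test is a hypothetical stand-in; real eLxr tests would typically dispatch to target hardware via LAVA or a board farm rather than the local host.

```python
import subprocess
import pytest


@pytest.fixture
def target_cmd():
    # Hypothetical command under test; on real targets this might run over serial or SSH
    return ["uname", "-s"]


def test_kernel_reports_linux(target_cmd):
    # Run the command, capture output, and fail the test if it errors or times out
    result = subprocess.run(target_cmd, capture_output=True, text=True, timeout=10)
    assert result.returncode == 0
    assert "Linux" in result.stdout
```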
Posted 22 hours ago
5.0 years
0 Lacs
Greater Kolkata Area
Remote
Job Summary We're hiring top-tier backend talent in India to work on mission-critical services for international projects. Job Description Company Overview: Outsourced is a leading ISO certified India & Philippines offshore outsourcing company that provides dedicated remote staff to some of the world's leading international companies. Outsourced is recognized as one of the Best Places to Work and has achieved Great Place to Work Certification. We are committed to providing a positive and supportive work environment where all staff can thrive. As an Outsourced staff member, you will enjoy a fun and friendly working environment, competitive salaries, opportunities for growth and development, work-life balance, and the chance to share your passion with a team of over 1000 talented professionals. Job Summary We are hiring two (2) Senior Golang Developers who will play a key role in designing and optimizing backend systems. This is a fully remote role, ideal for engineers who thrive in distributed teams and are passionate about building scalable infrastructure using modern cloud-native technologies. Must-Have Skills Candidates Must Have 5+ years of professional experience in backend or systems development. Proficiency in Golang, capable of writing clean, scalable, production-ready code. Hands-on experience with AWS, including deployments, monitoring, and system scaling. Database expertise in both SQL and NoSQL systems, specifically: PostgreSQL Redis Strong knowledge of Kubernetes, particularly in container orchestration and service operations. Experience developing and maintaining high-traffic, high-availability systems. Understanding of concurrency and multithreading principles in Golang. Bachelor’s degree or Diploma in Computer Science or a related technical field. Nice-to-Have Skills Familiarity with PHP, Python, or Scala Experience with CI/CD pipelines, Docker, or distributed systems Knowledge of additional backend frameworks and cloud tools Key Responsibilities Architect, develop, and maintain scalable backend services using Golang. Collaborate with DevOps, QA, and cross-functional teams to deliver reliable software. Participate in code reviews, architectural discussions, and sprint planning. Troubleshoot and resolve issues in staging and production environments. Write automated tests and ensure high coverage and quality. Mentor junior developers and share best practices across the team. What We Offer Health Insurance: We provide medical coverage up to 20 lakh per annum, which covers you, your spouse, and a set of parents. This is available after one month of successful engagement. Professional Development: You'll have access to a monthly upskill allowance of ₹5000 for continued education and certifications to support your career growth. Leave Policy: Vacation Leave (VL): 10 days per year, available after probation. You can carry over or encash up to 5 unused days. Casual Leave (CL): 8 days per year for personal needs or emergencies, available from day one. Sick Leave: 12 days per year, available after probation. Flexible Work Hours or Remote Work Opportunities – Depending on the role and project. Outsourced Benefits such as Paternity Leave, Maternity Leave, etc. Show more Show less
Posted 22 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
DevOps Engineer L1 Role Overview: As a DevOps Engineer (L1), you will assist in automating processes, managing deployments, and maintaining the infrastructure. This role is ideal for someone with foundational knowledge of DevOps principles who is eager to grow in a fast-paced environment. Key Responsibilities: Support and maintain CI/CD pipelines to streamline deployments. Monitor application performance and troubleshoot issues. Perform routine tasks such as server monitoring, log analysis, and backup management. Collaborate with development teams to ensure smooth releases. Maintain and optimize cloud infrastructure (e.g., AWS, Azure, or GCP). Ensure basic security measures, including firewall configuration and patch management. Qualifications: Bachelor’s degree in Computer Science, IT, or a related field. Experience with CI/CD tools like Jenkins, GitLab, or CircleCI. Basic knowledge of cloud platforms (AWS, Azure, or GCP). Familiarity with Linux/Unix systems and scripting languages (e.g., Bash, Python). Understanding of containerization (Docker) and orchestration tools (Kubernetes is a plus). Good problem-solving skills and a willingness to learn. Show more Show less
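For the routine log-analysis tasks listed above, a small illustrative Python script might look like the following. The log path and alert threshold are assumptions for the sketch, not a prescribed setup; in practice this kind of check is usually wired into the monitoring stack.

```python
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/var/log/app/app.log")  # hypothetical application log path
ALERT_THRESHOLD = 100                    # assumed error-count threshold for illustration

levels = Counter()
for line in LOG_FILE.read_text(errors="ignore").splitlines():
    for level in ("ERROR", "WARN", "INFO"):
        if level in line:
            levels[level] += 1
            break

print(dict(levels))
if levels["ERROR"] > ALERT_THRESHOLD:
    print("High error volume detected - consider raising an alert")
```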
Posted 22 hours ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Job Title: Data Engineer Candidate Specification: 5+ years, immediate to 30 days. (All 5 days work from office, 9 hours). Job Description Experience with any modern ETL tool (PySpark, EMR, Glue, or others). Experience in AWS, with programming knowledge in Python, Java, and Snowflake. Experience in DBT and StreamSets (or similar tools like Informatica, Talend), with migration work done in the past. Agile experience is required, with Version One or Jira tool expertise. Provide hands-on technical solutions to business challenges and translate them into process/technical solutions. Good knowledge of CI/CD and DevOps principles. Experience in data technologies - Hadoop, PySpark / Scala (any one). Skills Required Role: Data Engineer Industry Type: IT/Computers - Software Functional Area: IT-Software Required Education: B Tech Employment Type: Full Time, Permanent Key Skills: PYSPARK, EMR, GLUE, ETL TOOL, AWS, CI/CD, DEVOPS Other Information Job Code: GO/JC/102/2025 Recruiter Name: Sheena Rakesh Show more Show less
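A minimal PySpark sketch of the kind of ETL work described above is shown below. The S3 paths, column names, and transformation are hypothetical placeholders; an actual job would follow the team's schemas and use Glue or EMR job configuration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sample-etl").getOrCreate()

# Hypothetical raw input: a CSV extract landed in S3
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")

# Basic cleaning: drop rows without an ID and normalize the timestamp column
cleaned = (
    orders
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Write the curated dataset back to S3 in a columnar format
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
spark.stop()
```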
Posted 22 hours ago
6.0 years
60 - 65 Lacs
Kolkata, West Bengal, India
Remote
Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MAM, App integration Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. 
Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
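The role above calls for vector-search experience (Weaviate, Pinecone, Qdrant, or Faiss); a minimal Faiss sketch of similarity search over asset embeddings is shown below. Random vectors stand in for real embedding-model output, and the dimension is only an assumed typical value.

```python
import numpy as np
import faiss  # pip install faiss-cpu

dim = 384  # assumed sentence-embedding size; the real model's dimension may differ
index = faiss.IndexFlatL2(dim)

# Hypothetical embeddings for indexed media assets (would come from an embedding pipeline)
asset_vectors = np.random.rand(10_000, dim).astype("float32")
index.add(asset_vectors)

# Query with a single embedding and retrieve the 5 nearest assets
query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)
print(ids[0], distances[0])
```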
Posted 22 hours ago
Python has become one of the most popular programming languages in India, with a high demand for skilled professionals across various industries. Job seekers in India have a plethora of opportunities in the field of Python development. Let's delve into the key aspects of the Python job market in India:
The average salary range for Python professionals in India varies based on experience levels. Entry-level positions can expect a salary between INR 3-6 lakhs per annum, while experienced professionals can earn between INR 8-20 lakhs per annum.
In the field of Python development, a typical career path may include roles such as Junior Developer, Developer, Senior Developer, Team Lead, and eventually progressing to roles like Tech Lead or Architect.
In addition to Python proficiency, employers often expect professionals to have skills in areas such as: - Data Structures and Algorithms - Object-Oriented Programming - Web Development frameworks (e.g., Django, Flask) - Database management (e.g., SQL, NoSQL) - Version control systems (e.g., Git)
Here are some common Python interview questions to prepare for, with an indicative difficulty level:
- Explain the difference between the __str__ and __repr__ methods in Python. (medium)
- What is the purpose of the __init__ method in Python? (basic)
- What is the difference between the append() and extend() methods in Python lists? (basic)
- What is the purpose of the __name__ variable in Python? (medium)
- What is the purpose of the pass statement in Python? (basic)
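A quick sketch covering two of these topics, __str__ vs __repr__ and append() vs extend(), is shown below; the Asset class is just an illustrative example.

```python
class Asset:
    def __init__(self, name):
        self.name = name

    def __str__(self):          # user-friendly text, used by print() and str()
        return f"Asset: {self.name}"

    def __repr__(self):         # unambiguous, developer-oriented representation
        return f"Asset(name={self.name!r})"


items = [1, 2]
items.append([3, 4])   # appends the list as a single element -> [1, 2, [3, 4]]

more = [1, 2]
more.extend([3, 4])    # unpacks the iterable into the list -> [1, 2, 3, 4]

print(Asset("pump"), repr(Asset("pump")), items, more)
```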
As you explore Python job opportunities in India, remember to brush up on your skills, prepare for interviews diligently, and apply confidently. The demand for Python professionals is on the rise, and this could be your stepping stone to a rewarding career in the tech industry. Good luck!