4.0 - 8.0 years
0 Lacs
Thrissur, Kerala
On-site
As a DevOps Engineer with 4+ years of experience, you will play a crucial role in optimizing software development and deployment processes, enhancing system reliability, and implementing automation to streamline our infrastructure. Your primary responsibility will be to collaborate with development, operations, and cross-functional teams to ensure the efficient delivery of high-quality software products.

Your key responsibilities will include developing and maintaining automation scripts and tools for provisioning, configuration, and deployment of infrastructure and applications. You will also be involved in implementing and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines, creating and managing infrastructure using Infrastructure as Code (IaC) tools, building and managing containerized applications, setting up monitoring and alerting systems, implementing security best practices, working closely with development teams, optimizing infrastructure for cost-efficiency and scalability, troubleshooting infrastructure issues, and maintaining comprehensive documentation.

To qualify for this role, you should possess a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 4+ years of experience as a DevOps Engineer or in a similar role. Proficiency in scripting languages such as Python and Bash, familiarity with CI/CD tools and version control systems like Jenkins and Git, knowledge of containerization and orchestration technologies like Docker and Kubernetes, experience with cloud platforms such as AWS, Azure, or Google Cloud, strong problem-solving skills, excellent communication, and teamwork abilities are required. Preference will be given to candidates from Kerala.

If you are passionate about optimizing software development processes, enhancing system reliability, and working collaboratively in a fast-paced environment, we invite you to join our team at Webandcrafts.
We look forward to learning more about your candidacy through this application.
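The CI/CD responsibilities described above often come down to simple automated gates. As a minimal, hypothetical sketch (none of these function names come from the posting), a Python pipeline step might compare a canary deployment's error rate against the current baseline before promoting it:

```python
def canary_gate(baseline_error_rate: float,
                canary_error_rate: float,
                tolerance: float = 0.01) -> bool:
    """Return True if the canary may be promoted to full rollout.

    The canary is allowed a small error-rate regression (`tolerance`)
    over the current baseline before the deployment is halted.
    """
    return canary_error_rate <= baseline_error_rate + tolerance


def rollout_decision(baseline: float, canary: float) -> str:
    # A CI/CD job would call this and fail the pipeline on "rollback".
    return "promote" if canary_gate(baseline, canary) else "rollback"
```

For example, `rollout_decision(0.02, 0.05)` halts the deployment because the canary's 5% error rate exceeds the 2% baseline by more than the 1% tolerance.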
Posted 15 hours ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Manager, ML Ops Engineer at Genpact, you will be an integral part of the team responsible for building and maintaining the infrastructure and pipelines for cutting-edge Generative AI applications. Your role will involve collaborating closely with the Generative AI Full Stack Architect to ensure the efficiency, scalability, and reliability of Generative AI models in production.

Your responsibilities will include designing, developing, and implementing ML/LLM pipelines for generative AI models, conducting research to explore new techniques, integrating GenAI APIs and microservices into existing systems, and automating ML tasks across the model lifecycle using tools like GitOps, CI/CD pipelines, and containerization technologies. Furthermore, you will be involved in MLOps support and maintenance on ML platforms such as Dataiku and SageMaker, implementing version control, CI/CD pipelines, and containerization techniques, designing monitoring and alerting systems, and collaborating with infrastructure and DevOps teams for resource management.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, Engineering, or a related field, along with a deep understanding of generative AI concepts and experience in MLOps or related areas. Strong expertise in cloud platforms, CI/CD principles, containerization technologies, and monitoring tools is essential. Excellent communication, collaboration, and problem-solving skills are also required, along with a passion for Generative AI and its potential to transform industries.

If you are looking to join a dynamic team at Genpact and contribute to the advancement of Generative AI technologies, we encourage you to apply for the Manager, ML Ops Engineer position in India-Bangalore. This is a full-time role with the opportunity to work on cutting-edge projects and make a significant impact in the field of AI.
Join us on this exciting journey to shape the future with your expertise in ML Ops and Generative AI!
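Automating tasks across the model lifecycle, as this posting describes, frequently includes a promotion gate that picks which trained model version goes to production. A simplified, hypothetical illustration in Python (the metric name and threshold are invented for the example, not Genpact's process):

```python
def pick_model(candidates: list, metric: str = "f1", min_score: float = 0.8):
    """Return the best candidate dict meeting the quality bar, or None.

    Each candidate is a dict of evaluation results, e.g.
    {"version": "v2", "f1": 0.85}. Models below `min_score` are never
    promoted, even if they are the best available.
    """
    eligible = [m for m in candidates if m.get(metric, 0.0) >= min_score]
    return max(eligible, key=lambda m: m[metric], default=None)
```

A registry-driven CI job could call this after each training run and deploy only when a non-None candidate is returned.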
Posted 20 hours ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an integral part of the SAP team, you will have the opportunity to contribute to our mission of helping the world run better. Our organizational culture is built on collaboration and a collective commitment to making a positive impact. At SAP, we strive to lay the groundwork for the future while fostering an inclusive work environment that values diversity and flexibility, all while remaining dedicated to our purpose-driven and forward-thinking approach. We offer a supportive and nurturing team dynamic that emphasizes continuous learning and professional growth, acknowledges individual achievements, and provides a range of benefits to cater to your needs.

Your responsibilities will involve working with a diverse array of SAP technologies and products, particularly focusing on the development of SAP Host Agent. Collaborating closely with a motivated team, you will be instrumental in enhancing security, alerting, monitoring, and high-availability features for SAP clients globally. This role will also entail close cooperation with various SAP and SAP Partner technologies, including operating systems and database management systems. Moreover, you will have the opportunity to expand your expertise in hybrid and cloud operations within Hyperscaler environments such as Microsoft Azure, Google Cloud, and AWS.

The SAP ABAP Platform unit plays a pivotal role in the enterprise application space, setting industry benchmarks and ensuring a dedicated ABAP environment within the SAP Business Technology Platform. Your involvement in this unit will contribute significantly to SAP's reputation as a leader in end-to-end business application software and related services, encompassing database management, analytics, intelligent technologies, and experience management. We are a purpose-driven cloud company with a global reach, comprising millions of users and a diverse workforce united by a collaborative ethos and a shared commitment to personal and professional development.
At SAP, you will have the platform to unleash your full potential and make a meaningful impact. Our culture at SAP is rooted in inclusivity, well-being, and adaptable work arrangements that empower every individual, irrespective of their background, to thrive and excel. We believe that our strength lies in the unique skills and attributes each person brings to our organization, and we invest in our employees to instill confidence and unlock their talents. By fostering an environment that celebrates diversity and supports personal growth, we aim to create a more equitable and inclusive world.

SAP is an equal opportunity employer and advocates for accessibility for applicants with physical or mental disabilities. Should you require any accommodations during the application process, please reach out to our Recruiting Operations Team at Careers@sap.com. For SAP employees, please note that only permanent positions are eligible for the SAP Employee Referral Program, subject to the guidelines outlined in the SAP Referral Policy. Specific terms and conditions may apply to roles within Vocational Training programs. As part of our commitment to Equal Employment Opportunity, SAP embraces diversity and inclusion and provides reasonable accommodations to candidates with disabilities. Candidates selected for this role may be subject to background verification by an external vendor.

Requisition ID: 432356 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are a talented Site Reliability Engineering Manager with a passion for distributed storage systems. You will be part of a focused team at Apple, bringing distributed storage technologies to Apple's infrastructure. Your role is crucial as Apple operates at a huge scale and your impact will be enormous. The mission is to power storage behind many of Apple's most popular services, and with your passion and dedication, there are no limits to what you can achieve.

As the Storage SRE organization seeks a strong engineering leader to manage Storage-focused SRE teams, you will work closely with peer SRE teams and development partners. Your responsibilities include building and optimizing the Storage stack from the bare metal to the top of the application. This involves designing provisioning systems, code deployment, monitoring, alerting, and performance improvements. Together with your team, you will help run the storage used by some of Apple's largest teams.

Minimum Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. You should have proven experience in a leadership role within an SRE or DevOps team, with a specific passion for distributed storage. A strong background in distributed systems, storage architectures, and data management is essential. Deep knowledge of SRE principles, including monitoring, alerting, error budgets, fault analysis, and other common reliability engineering concepts is required. Leading initiatives to enhance the scalability and performance of distributed storage systems is also part of the role, along with collaborating with engineering teams to design and implement robust and scalable storage solutions.

Preferred Qualifications include experience with Kubernetes, Docker, and containerization, as well as proficiency in at least one of these programming languages: Golang, Java, or Rust. Knowledge of distributed storage (block storage) or similar large-scale distributed databases is beneficial. Familiarity with CI/CD pipelines and infrastructure as code (Terraform, Ansible), knowledge of security best practices, and compliance requirements in storage systems are also advantageous. An understanding of data durability, consistency models, and storage performance optimization techniques is a plus. Education & Experience requirements are not specified in the job description.
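The error budgets mentioned in the qualifications have a simple arithmetic core: an availability SLO of 99.9% over a 30-day window leaves roughly 43.2 minutes of allowed downtime. A small Python sketch of that calculation (an illustration of the concept, not Apple's tooling):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (in minutes) over the window for an availability SLO.

    For slo=0.999 and a 30-day window: 0.001 * 30 * 24 * 60 = 43.2 minutes.
    """
    return (1.0 - slo) * window_days * 24 * 60


def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative once blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget
```

Teams commonly slow or freeze feature rollouts once `budget_remaining` approaches zero, which is how the budget ties reliability to release pace.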
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an Observability Architect, you will play a crucial role in ensuring end-to-end visibility into the performance and health of our systems. Your primary focus will be on utilizing tools such as Dynatrace, Splunk, New Relic, Prometheus, and Grafana to set up monitoring, logging, and alerting mechanisms.

In this role, you will be responsible for designing and implementing observability solutions that provide real-time insights into the behavior of our applications and infrastructure. By leveraging Dynatrace and Splunk, you will be able to proactively identify and address any performance issues or bottlenecks. You will collaborate closely with cross-functional teams to integrate observability best practices into our development and deployment processes. Your expertise in utilizing these key tools and platforms will enable us to maintain high system reliability and performance.

Overall, your role as an Observability Architect will be instrumental in enhancing our ability to monitor, analyze, and optimize the performance of our systems, ultimately ensuring a seamless user experience for our customers.
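Alerting mechanisms of the kind this role sets up in tools like Prometheus typically fire only after a metric stays above a threshold for a sustained period, not on a single spike. A heavily simplified Python stand-in for a Prometheus-style `for:` clause (illustrative only; real rule evaluation is far richer):

```python
from collections import deque


class ThresholdAlert:
    """Fire once a metric exceeds `threshold` for `for_samples`
    consecutive observations, suppressing one-off spikes."""

    def __init__(self, threshold: float, for_samples: int):
        self.threshold = threshold
        self.window = deque(maxlen=for_samples)

    def observe(self, value: float) -> bool:
        # Returns True only when the whole sliding window is in breach.
        self.window.append(value)
        return (len(self.window) == self.window.maxlen
                and all(v > self.threshold for v in self.window))
```

With `ThresholdAlert(0.9, 3)`, two high samples do not page anyone; the third consecutive breach does, and any recovery sample resets the condition.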
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a key member of the team, you will drive operational excellence by setting clear goals, priorities, and performance metrics. You will play a crucial role in encouraging professional development and fostering knowledge sharing within the team. Your responsibilities will include overseeing the automation of operational tasks such as provisioning, deployment, monitoring, and incident response. It will be your duty to ensure that robust monitoring, logging, and alerting systems are in place to proactively identify and address any issues before they impact customers. Join us in this dynamic role and make a real impact on our operational efficiency and customer satisfaction.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of our dynamic team, you will play a pivotal role in revolutionizing customer relationship management (CRM) by leveraging advanced artificial intelligence (AI) capabilities. The groundbreaking partnership between Salesforce and Google Cloud, valued at $2.5 billion, aims to enhance customer experiences through the integration of Google's Gemini AI models into Salesforce's Agentforce platform. By enabling businesses to utilize multi-modal AI capabilities for processing images, audio, and video, we are paving the way for unparalleled customer interactions.

Join us in advancing the integration of Salesforce applications on the Google Cloud Platform (GCP). This is a unique opportunity to work at the forefront of identity provider (IDP), AI, and cloud computing, contributing to the development of a comprehensive suite of Salesforce applications on GCP. You will be instrumental in building a platform on GCP to facilitate agentic solutions on Salesforce.

Our Public Cloud engineering teams are at the forefront of innovating and maintaining a large-scale distributed systems engineering platform. Responsible for delivering hundreds of features daily to tens of millions of users across various industries, our teams ensure high reliability, speed, security, and seamless preservation of customizations and integrations with each deployment. If you have deep experience in concurrency, large-scale systems, data management, high availability solutions, and back-end system optimization, we want you on our team.

Your Impact:
- Develop cloud infrastructure automation tools, frameworks, workflows, and validation platforms on public cloud platforms like AWS, GCP, Azure, or Alibaba
- Design, develop, debug, and operate resilient distributed systems spanning thousands of compute nodes across multiple data centers
- Utilize and contribute to open-source technologies such as Kubernetes, Argo, etc.
- Implement Infrastructure-as-Code using Terraform
- Create microservices on containerization frameworks like Kubernetes, Docker, Mesos
- Resolve complex technical issues, drive innovations to enhance system availability, resilience, and performance
- Maintain a balance between live-site management, feature delivery, and technical debt retirement
- Participate in on-call rotation to address real-time complex problems and ensure services are operational and highly available

Required Skills:
- Proficiency in Terraform, Kubernetes, or Spinnaker
- Deep knowledge of programming languages such as Java, Golang, Python, or Ruby
- Working experience with Falcon
- Ownership and operation of critical service instances
- Experience with Agile development and Test Driven Development methodologies
- Familiarity with essential infrastructure services including monitoring, alerting, logging, and reporting applications
- Preferred experience with Public Cloud platforms

If you are passionate about cutting-edge technologies, thrive in a fast-paced environment, and are eager to make a significant impact in the world of CRM and AI, we welcome you to join our team.
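Infrastructure-as-Code pipelines like those listed above often add a guardrail that inspects a Terraform plan before applying it. As a hypothetical sketch (not Salesforce's tooling), a Python check over the dict produced by `terraform show -json tfplan` could flag any resource the plan would delete:

```python
def destructive_changes(plan: dict) -> list:
    """Return addresses of resources a Terraform plan would delete.

    `plan` is the parsed JSON from `terraform show -json`; each entry in
    `resource_changes` carries a `change.actions` list such as ["delete"]
    or ["delete", "create"] (a replacement). A CI step could fail the
    pipeline when this list is non-empty, requiring manual approval.
    """
    flagged = []
    for rc in plan.get("resource_changes", []):
        if "delete" in rc.get("change", {}).get("actions", []):
            flagged.append(rc["address"])
    return flagged
```

The resource addresses and the S3/EC2 examples in the test below are invented; only the `resource_changes` / `change.actions` shape follows Terraform's documented JSON output format.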
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
The Executive - Service Help Desk (Communication) serves as the primary central point of contact and communication hub for all operational activities, incidents, and service requests within a 24/7 mission critical data center environment. You are responsible for ensuring timely, accurate, professional, and consistent information flow between internal operations teams, external clients, vendors, and management. Your role will play a crucial part in maintaining transparency and managing expectations during critical events and routine service delivery.

You will act as the central communication point during all operational incidents such as power outages, cooling system failures, network disruptions, and security breaches. Your responsibilities include disseminating real-time incident updates, status reports, and resolution notifications to predefined internal stakeholders and external clients through various communication channels like email, SMS, and conference bridge calls. It is essential to ensure that all communications are clear, concise, accurate, and adhere to service level agreements for incident notification.

Handling incoming service requests, logging, categorizing, and prioritizing them from clients and internal teams will be part of your routine. You will create and update incident and service request tickets accurately in the designated platform, assign tickets to the appropriate operational teams, monitor progress, follow up with resolution teams, and provide timely updates to stakeholders. Additionally, you will manage inbound and outbound communication with data center clients regarding service status, planned maintenance schedules, incident updates, and operational queries. Polite, professional, and accurate responses to client inquiries, along with managing client expectations regarding response and resolution times, are crucial aspects of your role.

Furthermore, effective coordination with internal operations teams, acting as the first line of escalation for communication issues or delays, facilitating communication between shifts during handover, preparing comprehensive daily shift handover reports, and maintaining communication contact lists, escalation matrices, and standard operating procedures are part of your responsibilities. You should have 3-5 years of experience in a Service Help Desk, Call Center, Network Operations Center (NOC), or similar customer-facing communication role.

Gleeds, the company you will be working for, is a global property and construction consultancy with over 150 years of expertise, operating in 28 countries worldwide. They drive innovation, sustainability, and value, delivering transformative projects that shape communities and redefine the built environment.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a GCP Data Engineer, you will be responsible for managing, maintaining, and troubleshooting cloud data pipelines. The ideal candidate should have over 5 years of industry experience in Data Engineering support and enhancement. You will need to be proficient in Cloud Platform services (GCP, Azure, AWS, etc.) and have a strong understanding of data pipeline architectures and ETL processes. Your role will involve leveraging your excellent Python programming skills for data processing and automation, along with SQL query writing skills for data analysis and experience with relational databases. Additionally, familiarity with version control systems like Git is required.

Your responsibilities will include analyzing, troubleshooting, and resolving complex data pipeline issues. You will utilize your software engineering experience to optimize data pipelines, improve performance, and enhance reliability. It is essential to continuously optimize data pipeline efficiency, reduce operational costs, automate repetitive tasks in data processing, and set up monitoring and alerting for data pipelines. You will be expected to perform SLA-oriented monitoring for critical pipelines and implement improvements post-business approval for SLA adherence if needed.

Moreover, your role will involve monitoring the performance and reliability of data pipelines, Informatica ETL workflows, MDM, and Control-M jobs. Conducting post-incident reviews, implementing improvements for data pipelines, and developing and maintaining documentation for data pipeline systems and processes are crucial aspects of the job. Experience with Data Visualization using Google Looker Studio, Tableau, Domo, Power BI, or similar tools is considered an added advantage.

To qualify for this position, you should possess a Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience. Holding any Cloud Professional Data Engineer certification will be an added advantage. Excellent verbal and written communication skills are necessary for effective collaboration and documentation. Strong problem-solving and analytical skills are key to addressing challenges in data engineering.

TELUS Digital is an equal opportunity employer committed to creating a diverse and inclusive workplace that values merit, competence, and performance without regard to any characteristic related to diversity.
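SLA-oriented monitoring of the kind described here ultimately reduces to comparing each pipeline's completion time against its deadline. A minimal, hypothetical Python sketch (the pipeline names and times in the example are invented, not from the posting):

```python
from datetime import datetime


def late_runs(expected_by: dict, completed_at: dict) -> list:
    """Names of pipelines that missed their SLA deadline or never finished.

    `expected_by` maps pipeline name -> deadline datetime;
    `completed_at` maps pipeline name -> actual completion datetime.
    A pipeline absent from `completed_at` counts as late.
    """
    late = []
    for name, deadline in expected_by.items():
        done = completed_at.get(name)
        if done is None or done > deadline:
            late.append(name)
    return sorted(late)
```

A scheduled monitoring job could run this each morning and route the returned names to an alerting channel.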
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
StoneX Ltd is an FCA authorized and regulated firm that excels in trade execution, clearing, and advisory services, primarily focusing on Commodities and Foreign Exchange sectors. As a proud member of the Fortune 500 StoneX Inc. family, we provide comprehensive services globally, spanning Commodities, Capital Markets, Currencies, and Asset Management. Our global team operates across Europe, the US, and Asia Pacific, where innovative minds collaborate in cross-functional teams to shape the future of financial markets.

Technology is at the core of our competitive edge, driving innovation and value creation. Our engineering teams leverage cutting-edge tools to deliver impactful solutions to production at pace, through rapid iteration and close collaboration with business stakeholders.

We are currently seeking a hands-on Senior Software Engineer with experience in building high-performing, scalable, enterprise-grade applications. In this role, you will be involved in architecture and development across all tiers of the application stack, focusing on low-latency mission-critical applications. You will work with a talented team of engineers, collaborating on application architecture, development, testing, and design. Additionally, you will technically lead a team of highly skilled software engineers and build relationships with key stakeholders across a diverse user base.
Key responsibilities include:
- Primary focus on server-side development
- Contributing to all phases of the development lifecycle within an Agile methodology
- Writing well-designed, testable, efficient code
- Ensuring designs align with specifications
- Preparing and releasing software components
- Supporting continuous improvement by exploring new technologies for architectural review

Qualifications required for this role include:
- Extensive experience in developing complex distributed event-based microservices using Java/Spring
- Designing and maintaining robust, scalable, high-performance Java applications
- Developing Restful APIs, gRPC services
- Experience with containerization (Docker, Kubernetes) and cloud platforms (Azure, AWS)
- Exposure to distributed messaging/streaming platforms (Apache Kafka)
- Building CI/CD pipelines (Azure DevOps, GHE)
- Familiarity with TDD/BDD, testing frameworks
- Strong knowledge of Relational Databases SQL and No-SQL databases
- Working as part of a global Agile team
- Knowledge of Reactive programming is a plus
- Proficiency in English

Standout factors for this role include:
- Minimum 5 years of experience, ideally within Financial services or FinTech
- Experience with DataDog observability, APM, Alerting
- Open Policy Agent (OPA) experience

Education:
- BS/MS degree in Computer Science, Engineering, or a related subject

Working Environment:
- Hybrid (2 days from home, 3 days from the office)
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
We are looking for a skilled technical leader capable of developing tools and services to enhance the test automation, test reporting, and test debugging processes for our team of automation engineers. Your role will involve guiding the automation of test infrastructure provisioning, scaling, and more. Additionally, as part of the team, you will be responsible for building frameworks to facilitate the integration of automated testing into CI/CD pipelines across various languages and frameworks. Your technical expertise and leadership will play a crucial role in fostering a culture of site reliability, test automation, shared ownership, and transparency.

Your responsibilities will include:
- Building and supporting tools and services to enhance our automated test platform
- Researching and implementing ways to improve user experience and reduce manual tasks
- Leading infrastructure automation efforts
- Spearheading test automation frameworks and CI/CD integration
- Managing test environments and infrastructure
- Promoting agile processes and fast release cycles
- Architecting monitoring and alerting systems for comprehensive test lifecycle observability
- Developing playbooks for incident response and disaster recovery
- Instilling a culture of site reliability, shared ownership, and automation throughout the organization

You will also be involved in technical design reviews, code quality processes, and utilizing GenAI/ML tools for test development and triage processes. The ideal candidate will have strong problem-solving ability, a passion for building usable and scalable systems, the ability to collaborate effectively across teams, a sense of responsibility and ownership, excellent communication skills, comfort with ambiguity, and a curiosity for constant learning and professional growth.

Additionally, you should possess over 10 years of experience in product quality, automation, and/or DevOps, hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, and demonstrate hands-on experience in developing, deploying, and securing services, particularly in regulated environments. Experience with software development productivity metrics, infrastructure provisioning using code and scripts, networking, big data technologies, databases, Linux administration, microservices, distributed systems, performance optimizations, public cloud providers, and VMware is preferred. Experience in cybersecurity and AI/ML testing would be an added advantage.

If you are excited about tackling complex challenges, driving innovation, and leading technical initiatives to enhance test automation processes, we encourage you to apply for this role and be a part of our dynamic team.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Site Reliability Engineering Manager at Apple, you will be a part of a dynamic team dedicated to bringing distributed storage technologies to Apple's infrastructure. Your role will involve managing Storage-focused SRE teams, collaborating closely with peer SRE teams, and development partners. You will play a pivotal role in building and optimizing the Storage stack, ranging from bare metal to application layers. This includes designing provisioning systems, code deployment strategies, monitoring, alerting, and performance enhancements. Your contributions will be instrumental in running the storage infrastructure utilized by some of Apple's largest teams.

To excel in this role, you must possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Additionally, you should have proven experience in a leadership position within an SRE or DevOps team, with a specific focus on distributed storage systems. A strong background in distributed systems, storage architectures, and data management is essential. Deep knowledge of SRE principles, such as monitoring, alerting, error budgets, fault analysis, and other reliability engineering concepts, will be beneficial in this role. Your responsibilities will also include leading initiatives to enhance the scalability and performance of distributed storage systems and collaborating with engineering teams to implement robust and scalable storage solutions.

Preferred qualifications for this role include experience with Kubernetes, Docker, and containerization, proficiency in programming languages like Golang, Java, or Rust, and knowledge of distributed storage or large-scale distributed databases. Familiarity with CI/CD pipelines and infrastructure as code tools like Terraform and Ansible, along with an understanding of security best practices and compliance requirements in storage systems, will be advantageous. Moreover, a grasp of data durability, consistency models, and storage performance optimization techniques will further enhance your effectiveness in this role.

Join Apple's Storage SRE organization and be part of a team that is revolutionizing the storage solutions behind some of Apple's most popular services. Your passion and dedication can make a significant impact on the scale and efficiency of Apple's infrastructure. Embrace this opportunity to contribute to innovative products, services, and customer experiences that define Apple's commitment to excellence.
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Haryana
On-site
As a SAP BTP Expert with 4-5 years of experience, your role will involve setting up SAP BTP accounts, configuring diverse runtimes and services, and managing security and authorization. You should have comprehensive expertise in service authentication, trust configuration, SAP Cloud Connector, Cloud Foundry, alert and notification services, transport services, CI/CD automation, and connectivity with on-premises systems.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4-5 years of hands-on experience in SAP BTP account setup, runtime and service configuration, security, and authorization.
- Proficiency in scripting languages and automation tools.
- Excellent communication and collaboration skills.
- Problem-solving mindset with the ability to troubleshoot complex issues.

Responsibilities:
- Configure SAP BTP accounts for various projects, considering scalability, security, and performance requirements.
- Expertise in configuring different runtimes and services on SAP BTP, including database services, application services, and integration services.
- Implement robust security measures and define authorization roles and policies for users and applications.
- Configure authentication mechanisms for SAP BTP services and establish trust relationships between components.
- Implement and manage SAP Cloud Connector for secure communication between SAP BTP Cloud Foundry and on-premises systems.
- Manage applications on SAP BTP Cloud Foundry, collaborating with development teams for deployment and scaling.
- Configure and manage alerting and notification services for proactive monitoring of SAP BTP applications.
- Set up transport services for smooth movement of applications and configurations across SAP BTP environments.
- Design and implement CI/CD pipelines, automating build, test, and deployment processes.
- Collaborate with cross-functional teams to understand project requirements and provide technical expertise.
- Document SAP BTP configurations, security measures, and best practices.
- Diagnose and resolve issues related to SAP BTP configurations, runtimes, services, and security.
- Provide timely support to ensure continuous operation.

If you are a results-driven SAP BTP Expert with a comprehensive skill set and proven experience in the above-mentioned areas, we invite you to apply and contribute to our innovative SAP BTP projects.
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
YipitData is the leading market research and analytics firm for the disruptive economy, having recently raised up to $475M from The Carlyle Group at a valuation over $1B. For three years and counting, YipitData has been recognized as one of Inc's Best Workplaces. As a fast-growing technology company with offices in various locations including NYC, Austin, Miami, and more, we cultivate a people-centric culture focused on mastery, ownership, and transparency. As a Web Crawling Specialist [Official, Internal Title: Data Solutions Engineer] at YipitData, you will play a pivotal role in designing, refactoring, and maintaining web scrapers that power critical reports across the organization. Reporting directly to the Data Solutions Engineering Manager, your contributions will ensure that data ingestion processes are resilient, efficient, and scalable, directly supporting multiple business units and products. In this role, you will be responsible for overhauling existing scraping scripts to improve reliability, maintainability, and efficiency. You will implement best coding practices to ensure quality and sustainability. Additionally, you will utilize sophisticated fingerprinting methods to avoid detection and blocking, handle dynamic content, navigate complex DOM structures, and manage session/cookie lifecycles effectively. Collaborating with cross-functional teams, you will work closely with analysts and stakeholders to gather requirements, align on targets, and ensure data quality. You will provide support to internal users of web scraping tooling by offering troubleshooting, documentation, and best practices to ensure efficient data usage for critical reporting. Your responsibilities will also include developing monitoring solutions, alerting frameworks to identify and address failures, evaluating scraper performance, diagnosing bottlenecks, and scaling issues. 
You will propose new tooling, methodologies, and technologies to enhance scraping capabilities and processes, staying up to date with industry trends and evolving bot-detection tactics. This fully remote opportunity based in India offers standard work hours from 11am to 8pm IST with flexibility. Effective communication in English, 4+ years of experience with web scraping frameworks, a strong understanding of HTTP, RESTful APIs, HTML parsing, browser rendering, and TLS/SSL mechanics, expertise in advanced fingerprinting and evasion strategies, and troubleshooting skills are key to succeeding in this role. At YipitData, we offer a comprehensive compensation package, including benefits, perks, and a competitive salary. We prioritize your personal life with offerings such as vacation time, parental leave, team events, and learning reimbursement. Your growth at YipitData is determined by the impact you make, fostering an environment focused on ownership, respect, and trust.
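The resilience this role calls for — keeping scrapers running through transient failures without hammering a target — typically rests on exponential backoff with jitter. A minimal sketch in Python; the function and parameter names are illustrative, not part of any specific framework named in the listing:

```python
import random
import time

def fetch_with_backoff(fetch, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call `fetch()` until it succeeds, backing off exponentially with jitter.

    `fetch` is any zero-argument callable that raises on transient failure
    (for example, an HTTP request). After `max_attempts` failed tries, the
    last exception propagates to the caller.
    """
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random fraction of 0.5s, 1s, 2s, ...
            # so many concurrent scrapers do not retry in lockstep.
            delay = base_delay * (2 ** attempt) * random.random()
            sleep(delay)
```

The injectable `sleep` parameter keeps the sketch testable without real delays; a production scraper would usually also cap the maximum delay and honor `Retry-After` headers.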
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
At TELUS Digital, you will play a crucial role in enabling customer experience innovation by fostering spirited teamwork, embracing agile thinking, and embodying a caring culture that prioritizes customers. As the global arm of TELUS Corporation, a leading telecommunications service provider in Canada, we specialize in delivering contact center and business process outsourcing solutions to major corporations across various sectors such as consumer electronics, finance, telecommunications, and utilities. With our extensive global call center capabilities, we offer secure infrastructure, competitive pricing, skilled resources, and exceptional customer service, all supported by TELUS, our multi-billion dollar parent company. In this role, you will leverage your expertise in Data Engineering, backed by a minimum of 4 years of industry experience, to drive the success of our projects. Proficiency in Google Cloud Platform (GCP) services including Dataflow, BigQuery, Cloud Storage, and Pub/Sub is essential for effectively managing data pipelines and ETL processes. Your strong command over the Python programming language will be instrumental in performing data processing tasks efficiently. You will be responsible for optimizing data pipeline architectures, enhancing performance, and ensuring reliability through your software engineering skills. Your ability to troubleshoot and resolve complex pipeline issues, automate repetitive tasks, and monitor data pipelines for efficiency and reliability will be critical in maintaining operational excellence. Additionally, your familiarity with SQL, relational databases, and version control systems like Git will be beneficial in streamlining data management processes. As part of the team, you will collaborate closely with stakeholders to analyze, test, and enhance the reliability of GCP data pipelines, Informatica ETL workflows, MDM, and Control-M jobs. 
Your commitment to continuous improvement, SLA adherence, and post-incident reviews will drive the evolution of our data pipeline systems. Excellent communication, problem-solving, and analytical skills are essential for effectively documenting processes, providing insights, and ensuring seamless operations. This role offers a dynamic environment where you will have the opportunity to work in a 24x7 shift, contributing to the success of our global operations and making a meaningful impact on customer experience.
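The SLA adherence mentioned above usually reduces to checking pipeline run completion times against a daily deadline. A hypothetical sketch — the record shape and field names are assumptions, not from Dataflow, Informatica, or Control-M specifically:

```python
from datetime import datetime

def sla_breaches(runs, deadline_hour):
    """Return the ids of runs that finished at or after the daily SLA deadline.

    `runs` is a list of dicts with 'id' and 'finished' (a datetime) keys;
    `deadline_hour` is the UTC hour by which each run must have completed.
    """
    breaches = []
    for run in runs:
        if run["finished"].hour >= deadline_hour:
            breaches.append(run["id"])
    return breaches
```

A real check would also flag runs that never finished at all and feed breaches into the post-incident review process the listing describes.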
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Data Platform Support Engineer, your main responsibility will be to ensure the smooth execution of data pipelines and prompt resolution of issues to uphold business continuity. You will play a crucial role in maintaining the health and reliability of data systems by conducting root cause analysis, implementing proactive measures, and minimizing disruptions. Your key accountabilities will include monitoring and managing the performance of Azure Data Factory pipelines, Databricks workflows, and SQL databases to guarantee seamless data processing. You will troubleshoot and resolve production incidents in Azure-based data pipelines, conduct root cause analysis, and implement preventive measures. Additionally, you will oversee and optimize the performance of Databricks notebooks and clusters to support efficient data transformations and analytics. It will be essential for you to ensure the reliability and scalability of data integration workflows by utilizing Azure-native monitoring tools and alerts. Collaborating with development teams to deploy and support new Azure Data Factory pipelines, SQL scripts, and Databricks jobs into production will also be part of your responsibilities. Maintaining compliance with data governance, security, and backup policies across the Azure platform is crucial. Furthermore, you will need to coordinate with stakeholders to provide clear updates on production incidents, resolutions, and performance improvements. Planning and executing disaster recovery and failover strategies for Azure Data Factory, Databricks, and SQL components is essential to ensure business continuity. Documenting operational processes, troubleshooting steps, and best practices for the Azure platform will be necessary to build a comprehensive knowledge base. 
Your technical skills should include expertise in Azure Data Factory and Databricks, proficiency in SQL, experience in monitoring and alerting using Azure Monitor and Log Analytics, strong incident management skills, knowledge of data governance and security standards, experience in process improvement, and proficiency in documentation. In summary, as a Data Platform Support Engineer, you will play a critical role in maintaining the health and reliability of data systems, ensuring seamless data processing, and implementing proactive measures to minimize disruptions. Your expertise in Azure Data Factory, Databricks, SQL, monitoring and alerting, incident management, data governance, process improvement, and documentation will be key to your success in this role.
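Proactive alerting of the kind described often starts with a simple failure-rate threshold over a window of recent pipeline runs. A minimal illustrative sketch — the status strings, threshold, and minimum-sample guard are assumptions, not Azure Monitor semantics:

```python
def should_alert(statuses, threshold=0.2, min_runs=5):
    """Alert when the failure rate over recent pipeline runs exceeds `threshold`.

    `statuses` is a list of run outcome strings such as 'Succeeded' or
    'Failed'. With fewer than `min_runs` observations no alert is raised,
    since a single failure out of two runs would otherwise page someone.
    """
    if len(statuses) < min_runs:
        return False
    failures = sum(1 for s in statuses if s == "Failed")
    return failures / len(statuses) > threshold
```

In practice a rule like this would sit behind the platform's native alerting (Azure Monitor action groups, for instance) rather than custom code, but the arithmetic is the same.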
Posted 6 days ago
0.0 - 4.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Splunk Infrastructure Management professional, you will be responsible for designing, deploying, and managing a Splunk environment tailored to civil engineering data. Your role will involve integrating various civil engineering data sources into Splunk, such as sensor data from bridges, traffic data, or construction site information. You will be tasked with creating and maintaining dashboards and reports within Splunk to visualize and analyze civil engineering data. Additionally, you will set up alerts and monitoring systems within Splunk to track key performance indicators (KPIs) related to infrastructure projects. Your responsibilities will also include diagnosing and resolving issues related to the Splunk environment and data ingestion. Collaboration will be a key aspect of your role as you will work closely with civil engineers, project managers, and other stakeholders to understand their needs and provide Splunk solutions that meet their requirements. This is a full-time, permanent position suitable for fresher candidates. The work location is in person, ensuring effective communication and collaboration with the team and stakeholders.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
You will be responsible for proactively monitoring data pipeline operations as an L1 DataOps Monitoring Engineer. Your main tasks will include identifying issues, raising alerts, and ensuring timely communication and escalation to minimize data downtime and enhance reliability. Your attention to detail and proactive approach will be key in maintaining the efficiency and integrity of our data operations.
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
As an experienced Platform Engineer with a focus on Infrastructure as Code (IaC), DevOps practices, and orchestration tools, you will play a pivotal role in leading resilient engineering initiatives across various technology domains. Your responsibilities will encompass overseeing the design and implementation of robust engineering solutions in both cloud-based and on-premises environments. You will also spearhead chaos engineering efforts to proactively identify and address potential system weaknesses, ensuring high availability and seamless disaster recovery processes. Collaboration will be a key aspect of your role as you engage with diverse teams within the organization to align and prioritize resiliency and recovery efforts. Your expertise in automation tools such as Ansible will be instrumental in streamlining processes and enhancing the overall resiliency posture of the technology organization. Additionally, you will be actively involved in incident response and recovery processes, integrating post mortem analyses to identify areas for improvement. Your extensive experience in platform engineering, coupled with a Bachelor's degree or equivalent qualification, will be invaluable in architecting and deploying enterprise-level solutions that prioritize system uptime and data integrity. Your ability to design systems that support massive transaction volumes and facilitate seamless disaster recovery will be put to the test as you navigate the complexities of multi-AZ and multi-Region cloud platforms. Furthermore, your proficiency in chaos engineering principles, observability solutions, and Agile development methodologies will be crucial in driving continuous improvement and resilience within the technology organization. Your dedication to customer needs, combined with excellent communication skills, will enable you to build lasting relationships and articulate complex resilience strategies effectively. 
If you have a proven track record of success in managing mission-critical systems, a strong technical background in infrastructure and service architecture, and a passion for driving innovation in resiliency and recovery, we invite you to join our team and make a significant impact on our technology landscape.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
As a Senior Staff Software Engineer, you will operate at the highest levels of technical depth, architectural design, and strategic influence. This role goes beyond writing code; you will shape the technical vision, drive engineering excellence, and mentor teams to solve complex, large-scale challenges in fintech. Your technical skills should include extensive hands-on experience in software development with proficiency in multiple languages such as Java, Python, C++, Go, etc. You must have a strong understanding of software architecture, design patterns, and best practices. Additionally, expertise in scalable, distributed systems and microservices architecture is essential. Deep knowledge of real-time transaction processing and high-throughput systems is a must. Experience with cloud platforms like AWS, GCP, Azure, and containerization tools like Docker and Kubernetes is required. You should have at least 10 years of professional software development experience. In terms of leadership and decision-making, you should have a proven ability to lead and mentor engineering teams, fostering a culture of technical excellence. Experience in making architectural decisions that impact large-scale systems is crucial. You must possess a strong ability to align technical strategies with business goals and long-term vision. Problem-solving and operational excellence are key aspects of this role. You should have strong analytical and debugging skills, with experience in troubleshooting high-scale production systems. Your ability to drive continuous improvement in performance, reliability, and scalability is essential. Experience with monitoring, alerting, and resilience engineering is also required. Communication and collaboration skills are equally important. You should have excellent communication skills and be capable of explaining technical concepts to non-technical stakeholders. 
Your ability to work across cross-functional teams, including product, business, and compliance, is necessary for success in this role. Preferred skills include knowledge of the fintech domain, such as understanding lending platforms, wealth management, or embedded financial services. Experience in policy management systems, claims automation, and underwriting workflows for insurance is advantageous. Familiarity with regulatory compliance, security, and governance in fintech would also be beneficial for this position.
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
About Northern Trust:
Northern Trust is a globally recognized, award-winning financial institution that has been in continuous operation since 1889. The organization takes pride in offering innovative financial services and guidance to successful individuals, families, and institutions while upholding principles of service, expertise, and integrity. With over 130 years of experience and more than 22,000 partners, Northern Trust serves sophisticated clients worldwide with leading technology and exceptional service.
Job Summary:
Northern Trust is seeking an experienced Manager of Technology Resilience & Automation to lead the automation, orchestration, and continuous improvement of the Technology and Infrastructure Resilience Process. The role focuses on enhancing the efficiency, reliability, and effectiveness of Disaster Recovery (DR) Operations through automation to ensure rapid recovery of critical systems and minimize downtime. The ideal candidate will possess expertise in disaster recovery planning, automation frameworks, IT Infrastructure, on-premise and cloud-based recovery solutions, and regulatory compliance requirements. This individual will play a critical role in identifying risks, developing mitigation strategies, and collaborating with cross-functional teams to maintain the security and resilience of the business during unforeseen disruptions.
Key Responsibilities:
- Disaster Recovery Automation and Strategy: Develop and implement an automated DR framework to enhance failover and recovery speed, integrate automation into DR Runbooks, testing, and execution, optimize Recovery Time Objective (RTO) and Recovery Point Objective (RPO) through automation, collaborate with Infrastructure teams to enhance DR capabilities, and ensure DR plans meet standards and compliance requirements.
- Automation & Tooling Implementation: Review requirements, approve design artifacts, strategize and utilize organization infrastructure tools to automate DR processes, lead DR automation solutions across different environments, and enhance monitoring and alerting capabilities for DR automation.
- DR Testing & Validation: Conduct DR tests, failover drills, and resilience simulations using automation, monitor and analyze test results for improvements, collaborate with relevant departments for alignment between DR, authentication, and security strategies, lead DR efforts during disruptions, and maintain documentation to support automation capabilities.
- Communication, Collaboration & Leadership: Lead a team focused on DR Automation, serve as a subject matter expert, provide guidance and training, develop and deliver effective presentations, communicate key metrics professionally, facilitate meetings with stakeholders, and maintain a technical network across multiple service areas.
Qualifications:
- Bachelor's degree or equivalent experience.
- Strong knowledge of IT automation strategies, tools, and frameworks.
- Proven experience in disaster recovery and business continuity planning.
- Excellent analytical and problem-solving skills.
- Strong communication and interpersonal skills.
- Experience in a global organization across multiple countries and time zones.
- Ability to work effectively under pressure.
- Knowledge of relevant regulations and compliance standards.
Experience:
- Minimum 12+ years in Management or Team Lead role in IT.
- Minimum 5 years in disaster recovery, business continuity planning, or point-in-time recovery planning.
- Practical experience in Agile development.
- Hands-on experience in leading DR automation projects.
- Strong communications, analytical, problem-solving, and incident response skills.
- Experience in leading disaster recovery exercises and response efforts.
- Management soft skills including team building, conflict resolution, and strategic planning.
Join Northern Trust:
Northern Trust offers a flexible and collaborative work culture, encourages movement within the organization, provides accessibility to senior leaders, and commits to assisting the communities it serves. If you are interested in working for a sustainable and admired company, consider building your career with Northern Trust today.
Reasonable accommodation:
Northern Trust is dedicated to working with individuals with disabilities and providing reasonable accommodations. If you require accommodations during the employment process, please contact the HR Service Center at MyHRHelp@ntrs.com.
Apply today to explore opportunities for flexible working and contribute to a diverse and inclusive workplace where different perspectives are valued. #MadeForGreater.
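The RPO optimization this role centers on can be validated automatically: at failure time, the achieved recovery point is simply the age of the newest backup. A hypothetical sketch of that check, with names and shapes chosen for illustration:

```python
from datetime import datetime, timedelta

def rpo_met(backup_times, failure_time, rpo):
    """Return True if the newest backup taken before `failure_time` is within `rpo`.

    `backup_times` is a list of datetimes when backups completed; `rpo` is a
    timedelta objective. With no backup before the failure, the objective is
    missed by definition (the whole dataset would be lost).
    """
    prior = [t for t in backup_times if t <= failure_time]
    if not prior:
        return False
    data_loss_window = failure_time - max(prior)
    return data_loss_window <= rpo
```

Running a check like this inside automated DR drills gives the "monitor and analyze test results" step a concrete pass/fail signal instead of a manual judgment call.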
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
About the Role
You will be joining the team working on Uber Direct, a rapidly growing product in the business-to-consumer (B2C) space. As a part of this team, you will be involved in developing features for the Uber Direct product, which is tailored for businesses. By leveraging our Dashboard product or integrating with our public API, merchants can easily set up delivery services on their websites or apps. Your role will focus on building scalable, high-quality features that enhance the delivery experience for our customers.
What You Will Do
Your responsibilities will include:
- Developing highly scalable and quality product features that are integral to our customers' daily usage
- Writing well-documented, maintainable, and scalable code
- Collaborating with cross-functional teams (Data, Design, Product) to solve problems and drive product development
- Designing data-driven architecture and systems
- Establishing reliable alerting and monitoring mechanisms for the products you work on
What You Will Need
To excel in this role, you should have:
- At least 3 years of experience in full-stack software engineering
- Proficiency in Golang, Java, or similar programming languages
- Demonstrated experience in collaborative work and leading cross-functional teams
- Previous experience in delivering high-quality products at scale
- Willingness to take ownership of products, considering aspects of operations, maintenance, and reliability
- Ability to mentor and guide junior engineers
Preferred Qualifications
Additionally, the following qualifications are preferred:
- Experience in working on cross-team initiatives
- Familiarity with micro-service architecture
- Self-driven mindset to identify opportunities for improvement and efficiency
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be responsible for solution design, architecture blueprints, cost estimates of components, and detailed documentation. Proactively identifying data-driven cost optimization opportunities for customers and supporting their team to achieve the same will be a key part of your role. You will also need to perform proof of concept on new services/features launched by AWS and integrate them with existing systems for improved performance and cost savings. Independently reviewing client infrastructure, conducting cost optimization audits, and well-architected reviews to identify cost inefficiencies like underutilized resources, architectural pitfalls, and pricing options will be crucial. Implementing governance standards such as resource tagging, account structure, provisioning, permissions, and access is also part of the job. Building a cost-aware ecosystem and enhancing cost visibility through alerting and reporting will be essential tasks. To be successful in this role, you should have a B.E/B.Tech/MCA degree with a minimum of 4+ years of experience working on the AWS cloud. A deep understanding of AWS cloud offerings and consumption models is required, along with proficiency in scripting languages like Python and Bash. Experience in DevOps practices and effective communication skills to engage stakeholders ranging from entry-level to C-suite is necessary. It would be advantageous if you have experience with third-party cost optimization tools like CloudCheckr, CloudAbility, CloudHealth, etc. Additionally, familiarity with AWS billing constructs including pricing options like On-demand, Reserved/Savings Plan, Spot, Cost and Usage Reports, and AWS Cost Management Tools would be beneficial. Possessing certifications such as AWS Certified SysOps Associate, AWS Certified Solutions Architect Associate, AWS Certified Solutions Architect Professional, or AWS Certified DevOps Professional is a plus. 
Prior experience in client communications, being a self-starter, and the ability to deliver under critical timelines are desirable traits for this role.
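The tagging governance described above is commonly enforced by auditing inventory for required tag keys, since untagged resources cannot be attributed to an owner or cost center. An illustrative sketch — the tag keys and record shape are assumptions, not any customer's actual policy:

```python
REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def untagged_resources(resources, required=frozenset(REQUIRED_TAGS)):
    """Map resource id -> set of missing required tag keys.

    `resources` is a list of dicts shaped like {'id': ..., 'tags': {...}},
    similar in spirit to what a cloud inventory API returns. Fully compliant
    resources are omitted from the report.
    """
    report = {}
    for res in resources:
        missing = set(required) - set(res.get("tags", {}))
        if missing:
            report[res["id"]] = missing
    return report
```

A report like this feeds directly into the cost-visibility alerting the listing mentions: resources that cannot be attributed to a cost center are exactly the ones that make spend opaque.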
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of the cloud and big data platforms. Your role will involve representing the NADP SRE team, contributing to the technical roadmap, and collaborating with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your efforts will be crucial in supporting machine learning (ML) and AI initiatives by ensuring the platform infrastructure is robust, efficient, and aligned with operational excellence. You will be tasked with designing, building, and optimizing cloud and data infrastructure to guarantee high availability, reliability, and scalability of big-data and ML/AI systems. This will involve implementing SRE principles such as monitoring, alerting, error budgets, and fault analysis. Additionally, you will collaborate with various teams to create secure and scalable solutions, troubleshoot technical problems, lead the architectural vision, and shape the technical strategy and roadmap. Your role will also encompass mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and utilizing your strong programming skills to integrate software and systems engineering. Furthermore, you will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at an enterprise scale while enforcing engineering best practices. To be successful in this role, you should have relevant experience (8-12 yrs) and a bachelor's engineering degree in computer science or its equivalent. 
You should possess the ability to design and implement scalable solutions, hands-on experience in Cloud (preferably AWS), Infrastructure as Code skills, experience with observability tools, proficiency in programming languages such as Python or Go, and a good understanding of Unix/Linux systems and client-server protocols. Experience in building Cloud, Big data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale. Additional qualifications that would be advantageous include experience with the Hadoop Ecosystem, certifications in cloud and security domains, and experience in building/managing a cloud-based data platform. Cisco encourages individuals from diverse backgrounds to apply, as the company values perspectives and skills that emerge from employees with varied experiences. Cisco believes in unlocking potential and creating diverse teams that are better equipped to solve problems, innovate, and make a positive impact.
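Error budgets, named among the SRE principles in this listing, fall out of the SLO arithmetically: a 99.9% monthly availability target leaves 0.1% of the period as allowable downtime. A small sketch of that bookkeeping, with units chosen for illustration:

```python
def error_budget_remaining(slo, total_minutes, downtime_minutes):
    """Fraction of the error budget still unspent (negative once overspent).

    `slo` is the availability target as a fraction (e.g. 0.999). The budget
    is the allowed downtime for the period: (1 - slo) * total_minutes.
    A 99.9% SLO over a 30-day month yields roughly 43.2 minutes of budget.
    """
    budget = (1 - slo) * total_minutes
    return (budget - downtime_minutes) / budget
```

Teams typically gate risky changes on this number: with most of the budget unspent, feature velocity wins; with the budget exhausted, reliability work takes priority.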
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
chennai, tamil nadu
On-site
The Tech Lead Quantitative Trading position in Chennai, India requires a candidate with over 7 years of experience to undertake various key responsibilities. You will be responsible for designing and optimizing scalable backend systems using Python and C++, overseeing the deployment of real-time trading algorithms, managing cloud infrastructure, CI/CD pipelines, and API integrations, as well as leading and mentoring a high-performing engineering team. Additionally, you will play a crucial role in laying the foundation for AI-driven trading innovations. Your role demands strong leadership, software architecture expertise, and hands-on problem-solving skills to ensure the seamless execution and scalability of trading systems. As part of your responsibilities, you will lead the end-to-end development of the trading platform, ensuring scalability, security, and high availability. You will also architect and optimize backend infrastructure for real-time algorithmic trading and large-scale data processing, design and implement deployment pipelines and CI/CD workflows for efficient code integration, and introduce best practices for performance tuning, system reliability, and security. In the realm of backend and data engineering, you will own the Python-based backend, work on low-latency system design to support algorithmic trading strategies, optimize storage solutions for handling large-scale financial data, and implement API-driven architectures leveraging WebSocket API and RESTful API knowledge to integrate with brokers, third-party data sources, and trading systems. 
Furthermore, your operational and leadership responsibilities will include:
- Monitoring and troubleshooting live trading systems to minimize downtime
- Handling broker communication during execution issues and API failures
- Setting up automated monitoring, logging, and alerting for production stability
- Leading, mentoring, and scaling a distributed engineering team
- Defining tasks, setting deadlines, and managing workflow using Zoho Projects
- Aligning team objectives with OKRs and driving execution
- Fostering a strong engineering culture that ensures high performance and technical excellence
- Managing cloud infrastructure to ensure high availability
- Overseeing GitLab repositories and enforcing best practices for version control
- Implementing robust CI/CD pipelines to accelerate deployment cycles
Preferred qualifications include 7+ years of hands-on experience in backend development with expertise in Python, proven experience leading engineering teams and delivering complex projects, strong knowledge of distributed systems, real-time data processing, and cloud computing, experience with DevOps, CI/CD, and containerized environments, familiarity with GitLab, AWS, and Linux-based cloud infrastructure, and bonus knowledge of quantitative trading, financial markets, or algorithmic trading. The ideal candidate for this position is a backend expert with a passion for building scalable, high-performance systems, who enjoys leading teams, mentoring engineers, and fostering a strong engineering culture, can balance hands-on coding with high-level architecture and leadership, thrives in a fast-paced, data-driven environment, and loves solving complex technical challenges.
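For the live-trading monitoring this role describes, a common primitive is a tail-latency check over a recent window: average latency hides the slow executions that actually hurt. A minimal sketch using the nearest-rank percentile; the threshold and function names are illustrative:

```python
import math

def p99(samples):
    """Nearest-rank 99th percentile of a non-empty list of latencies (ms)."""
    ordered = sorted(samples)
    rank = math.ceil(0.99 * len(ordered))  # 1-based nearest-rank position
    return ordered[rank - 1]

def latency_alert(samples, threshold_ms):
    """True when the window's p99 latency breaches the threshold."""
    return p99(samples) > threshold_ms
```

Note the asymmetry this buys: one outlier in a hundred samples does not page anyone, but a sustained slow tail does, which matches how execution-latency incidents actually present.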
Posted 1 week ago