3.0 - 7.0 years
0 Lacs
karnataka
On-site
You will have the opportunity to work at Capgemini, a company that empowers you to shape your career according to your preferences. You will be part of a collaborative community of colleagues worldwide, where you can reimagine what is achievable and contribute to unlocking the value of technology for leading organizations building a more sustainable and inclusive world.

Your Role:
- A very good understanding of current work, tools, and technologies in use.
- Comprehensive knowledge of and clarity on BigQuery, ETL, GCS, Airflow/Composer, SQL, and Python.
- Experience with Fact and Dimension tables and SCD (Slowly Changing Dimensions).
- A minimum of 3 years of experience in GCP Data Engineering.
- Proficiency in Java, Python, or Spark on GCP, with programming experience in Python, Java, or PySpark, and SQL.
- Hands-on experience with GCS (Cloud Storage), Composer (Airflow), and BigQuery.
- Ability to handle big data efficiently.

Your Profile:
- Strong data engineering experience using Java or Python, or Spark on Google Cloud.
- Experience in pipeline development using Dataflow or Dataproc (Apache Beam, etc.).
- Familiarity with other GCP services and databases such as Datastore, Bigtable, Spanner, Cloud Run, and Cloud Functions.
- Proven analytical skills and a problem-solving attitude.
- Excellent communication skills.

What you'll love about working here:
- You can shape your career with a range of career paths and internal opportunities within the Capgemini group.
- Access to comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work.
- The opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a diverse team of over 340,000 members in more than 50 countries, Capgemini leverages its over 55-year heritage to unlock the value of technology for clients across the entire breadth of their business needs. The company delivers end-to-end services and solutions, from strategy and design to engineering, fueled by market-leading capabilities in AI, generative AI, cloud, and data, along with deep industry expertise and a strong partner ecosystem.
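The SCD requirement called out above typically means maintaining Slowly Changing Dimension history in the warehouse. As a rough illustration only (not Capgemini's codebase), here is a minimal sketch of one SCD Type 2 step run against BigQuery from Python; the dataset, table, and column names (`dwh.dim_customer`, `staging.customer_updates`, `customer_id`, `address`) are hypothetical.

```python
# Minimal sketch, assuming hypothetical staging and dimension tables.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Step 1 of a classic SCD Type 2 load: close out changed rows and insert
# never-seen customers. A second INSERT...SELECT would then add the new
# version of each changed row.
scd2_step1 = """
MERGE `dwh.dim_customer` AS t
USING `staging.customer_updates` AS s
ON t.customer_id = s.customer_id AND t.is_current = TRUE
WHEN MATCHED AND t.address != s.address THEN
  UPDATE SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, address, is_current, valid_from, valid_to)
  VALUES (s.customer_id, s.address, TRUE, CURRENT_TIMESTAMP(), NULL)
"""

client.query(scd2_step1).result()  # blocks until the MERGE completes
```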
Posted 2 days ago
7.0 - 12.0 years
30 - 45 Lacs
bengaluru
Work from Office
About the Role
We are looking for a seasoned Engineering Manager well-versed in emerging technologies to join our team. As an Engineering Manager, you will ensure consistency and quality by shaping the right strategies. You will keep an eye on all engineering projects and ensure all duties are fulfilled. You will analyse other employees' tasks, collaborate effectively, develop newer team members into experts, and build reports on the progress of all projects.

What you will do
- Design tasks for other engineers as per Meesho's guidelines
- Perform regular performance evaluations and share and seek feedback
- Keep a close watch on various projects and monitor their progress
- Collaborate smoothly with the sales and design teams to innovate on new products
- Manage engineers and take ownership of projects while ensuring product scalability
- Conduct regular meetings to plan and develop reports on project progress

What you will need
- Bachelor's/Master's in Computer Science
- At least 7 years of professional experience
- At least 2 years of experience managing software development teams
- Ability to drive sprints and OKRs
- Deep understanding of transactional and NoSQL databases
- Deep understanding of messaging systems such as Kafka
- Good experience with cloud infrastructure (AWS/GCP)
- Good to have: data pipelines, ES
- Exceptional team-management skills; experience building large-scale distributed systems
- Experience with scalable systems
- Expertise in Java/Python and multithreading
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Data Quality Engineering Support Manager at McDonald's Corporation in Hyderabad, your role involves implementing, scaling, and supporting enterprise-wide data quality frameworks across cloud-native data platforms. You will drive initiatives to monitor, validate, and reconcile data for analytics, AI/ML, and operational workflows, ensuring trusted data across ingestion, processing, and consumption layers.

**Key Responsibilities:**
- Implement and support automated data quality checks for accuracy, completeness, timeliness, and consistency across datasets.
- Develop validation frameworks for ingestion pipelines, curated layers, and reporting models in platforms like BigQuery and Redshift.
- Integrate data quality controls into CI/CD pipelines and orchestration tools (e.g., Airflow, Cloud Composer).
- Respond to and resolve data quality incidents and discrepancies across data domains and systems.
- Collaborate with engineering and product teams to implement root cause analysis and build long-term remediation strategies.
- Establish SLAs and alerting thresholds for data quality KPIs.
- Deploy scalable data quality monitoring solutions across GCP (BigQuery, Cloud Storage) and AWS (Redshift, S3, Glue).
- Partner with data governance teams to align quality rules with business glossary terms, reference data, and stewardship models.
- Maintain playbooks, documentation, and automated reporting for quality audits and exception handling.
- Collaborate with data owners, analysts, and data product teams to promote a culture of data trust and shared ownership.
- Provide training and knowledge-sharing to enable self-service quality monitoring and issue triaging.

**Qualifications Required:**
- 5+ years of experience in data quality engineering, data operations, or data pipeline support, ideally in a cloud-first environment.
- Hands-on expertise in building and managing data quality checks, SQL, Python, cloud-native data stacks (BigQuery, Redshift, GCS, S3), and data quality monitoring tools or frameworks, plus troubleshooting skills across distributed data systems.
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Preferred: experience in Retail or Quick Service Restaurant environments, familiarity with data governance platforms, exposure to AI/ML data pipelines, and a current GCP Certification.

Join McDonald's in Hyderabad, India for a full-time hybrid role where you will play a crucial part in ensuring data quality across various platforms and systems.
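For illustration, a completeness check like those described above might be wired into Cloud Composer/Airflow as follows. This is a minimal sketch under assumed names (`curated.orders`, `order_id`, `load_date`, a 1% null-rate threshold), not McDonald's actual framework.

```python
# Minimal sketch: fail an Airflow task when a BigQuery completeness KPI
# breaches its threshold. Table, column, and threshold are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def check_order_completeness(**_):
    client = bigquery.Client()
    row = next(iter(client.query("""
        SELECT SAFE_DIVIDE(COUNTIF(order_id IS NULL), COUNT(*)) AS null_rate
        FROM `curated.orders`
        WHERE load_date = CURRENT_DATE()
    """).result()))
    # Raising marks the task failed, which triggers the DAG's alerting
    if row.null_rate and row.null_rate > 0.01:
        raise ValueError(f"order_id null rate {row.null_rate:.2%} exceeds the 1% SLA")


with DAG("dq_orders_daily", start_date=datetime(2024, 1, 1),
         schedule_interval="@daily", catchup=False) as dag:
    PythonOperator(task_id="check_order_completeness",
                   python_callable=check_order_completeness)
```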
Posted 5 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Data Engineer at Walmart Global Tech, you will be responsible for architecting, designing, and implementing high-performance data ingestion and integration processes in a complex, large-scale data environment. Your role will involve developing and implementing databases, data collection systems, data analytics, and other strategies to optimize statistical efficiency and quality. You will also oversee and mentor the data engineering team's practices to ensure data privacy and security compliance.

Collaboration is key in this role, as you will work closely with data scientists, data analysts, and other stakeholders to understand data needs and deliver on those requirements. Additionally, you will collaborate with all business units and engineering teams to develop a long-term strategy for data platform architecture. Your responsibilities will also include developing and maintaining scalable data pipelines, building new API integrations, and monitoring data quality to ensure accurate and reliable production data.

To be successful in this role, you should have a Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field, along with at least 12 years of proven experience in data engineering, software development, or a similar data management role. You should have strong knowledge of and experience with Big Data technologies such as Hadoop, Spark, and Kafka, as well as proficiency in scripting languages like Python, Java, and Scala. Experience with SQL and NoSQL databases, a deep understanding of data structures and algorithms, and familiarity with machine learning algorithms and principles are also preferred. Excellent communication and leadership skills are essential for this role, along with hands-on experience in data processing and manipulation. Expertise with GCP and GCP data processing tools such as GCS, Dataproc, DPaaS, BigQuery, and Hive, as well as experience with orchestration tools like Airflow, Automic, and Autosys, are highly valued.

Join Walmart Global Tech, where you can make a significant impact by leveraging your expertise to innovate at scale, influence millions, and shape the future of retail. With a hybrid work environment, competitive compensation, incentive awards, and a range of benefits, you'll have the opportunity to grow your career and contribute to a culture where everyone feels valued and included. Walmart Global Tech is committed to being an Equal Opportunity Employer, fostering a workplace culture where everyone is respected and valued for their unique contributions. Join us in creating opportunities for all associates, customers, and suppliers, and help us build a more inclusive Walmart for everyone.
Posted 6 days ago
1.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
As a GCP Data Engineer, you will play a crucial role in the development, optimization, and maintenance of data pipelines and infrastructure. Your proficiency in SQL and Python will be pivotal in the management and transformation of data, and your familiarity with cloud technologies will be highly beneficial as we strive to improve our data engineering processes.

You will be responsible for building scalable data pipelines: designing, implementing, and maintaining end-to-end pipelines to efficiently extract, transform, and load (ETL) data from various sources, while ensuring these pipelines are reliable, scalable, and performant.

Your expertise in SQL will be put to use as you write and optimize complex SQL queries for data extraction, transformation, and reporting. Collaboration with analysts and data scientists will be necessary to provide structured data for analysis.

Experience with cloud platforms, particularly GCP services such as BigQuery, Dataflow, GCS, and Postgres, will be valuable. Leveraging cloud services to enhance data processing and storage capabilities, as well as integrating tools into the data ecosystem, will be part of your responsibilities. Documenting data pipelines, procedures, and best practices will be essential for knowledge sharing within the team, and you will collaborate closely with cross-functional teams to understand data requirements and deliver effective solutions.

The ideal candidate has at least 3 years of experience with SQL and Python, a minimum of 1 year of experience with GCP services like BigQuery, Dataflow, GCS, and Postgres, and 2+ years of experience building data pipelines from scratch in a highly distributed and fault-tolerant manner. Comfort with a variety of relational and non-relational databases is essential. Proven experience building applications in a data-focused role, in both Cloud and traditional Data Warehouse environments, is preferred. Familiarity with Cloud SQL, Cloud Functions, Pub/Sub, and Cloud Composer, and a willingness to learn new tools and techniques, are desired qualities. Comfort with big data and machine learning tools and platforms, including open-source technologies like Apache Spark, Hadoop, and Kafka, will be advantageous.

Strong oral, written, and interpersonal communication skills are crucial for effective collaboration in a dynamic environment with undefined problems. If you are an inquisitive, proactive individual with a passion for data engineering and a desire to continuously learn and grow, we invite you to join our team in Chennai, Tamil Nadu, India.
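As a concrete (and purely illustrative) example of the load step of such a pipeline, the following sketch appends CSV files landed in GCS into a BigQuery table; the bucket path and table name are hypothetical.

```python
# Minimal sketch of a GCS-to-BigQuery load step; paths and names are assumptions.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,      # infer the schema here; a production job would pin it
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://raw-landing-zone/sales/2024-06-01/*.csv",  # hypothetical bucket/path
    "analytics.sales_raw",                           # hypothetical dataset.table
    job_config=job_config,
)
load_job.result()  # waits for completion and raises on failure
```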
Posted 6 days ago
4.0 - 9.0 years
10 - 14 Lacs
pune
Hybrid
Job Description
Technical Skills
Top skills for this position:
- Google Cloud Platform (Composer, BigQuery, Airflow, DataProc, Dataflow, GCS)
- Data warehousing knowledge
- Hands-on experience with Python and SQL databases
- Analytical skills to predict the consequences of configuration changes (impact analysis), identify root causes that are not obvious, and understand business requirements
- Excellent communication with different stakeholders (business, technical, project)
- Good understanding of the overall Big Data and Data Science ecosystem
- Experience building and deploying containers as services using Swarm/Kubernetes
- Good understanding of container concepts, such as building lean and secure images
- Understanding of modern DevOps pipelines
- Experience with streaming data pipelines using Kafka or Pub/Sub (mandatory for Kafka resources)
Good to have: Professional Data Engineer or Associate Data Engineer certification

Roles and Responsibilities:
- Design, build, and manage Big Data ingestion and processing applications on Google Cloud using BigQuery, Dataflow, Composer, Cloud Storage, and Dataproc
- Performance tuning and analysis of Spark, Apache Beam (Dataflow), or similar distributed computing tools and applications on Google Cloud
- Good understanding of Google Cloud concepts, environments, and utilities to design cloud-optimal solutions for machine learning applications
- Build systems to perform real-time data processing using Kafka, Pub/Sub, Spark Streaming, or similar technologies
- Manage the development lifecycle for agile software development projects
- Convert proofs of concept into industrialized Machine Learning models (MLOps)
- Provide solutions to complex problems; deliver customer-oriented solutions in a timely, collaborative manner
- Proactive thinking, planning, and understanding of dependencies
- Develop and implement robust solutions in test and production environments
Posted 1 week ago
5.0 - 10.0 years
20 - 30 Lacs
pune, bengaluru, mumbai (all areas)
Work from Office
Designing, deploying, and managing applications and infrastructure on Google Cloud. Responsible for maintaining solutions that leverage Google-managed or self-managed services, utilizing both the Google Cloud Console and the command-line interface.

Required Candidate profile
- Designing and implementing cloud solutions
- Deploying and managing applications
- Monitoring and maintaining cloud infrastructure
- Utilizing cloud services
- Automation and DevOps
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
bengaluru, karnataka, india
Remote
About Sibros Technologies

Who We Are
Sibros is accelerating the future of SDV excellence with its Deep Connected Platform that orchestrates full vehicle software update management, vehicle analytics, and remote commands in one integrated system. Adaptable to any vehicle architecture, Sibros' platform meets stringent safety, security, and compliance standards, propelling OEMs to innovate new connected vehicle use cases across fleet management, predictive maintenance, data monetization, and beyond. Learn more at www.sibros.tech.

Our Mission
Our mission is to help our customers get the most value out of their connected devices.

Follow us on LinkedIn | Youtube | Instagram

About The Role
Job Title: Senior Software Engineer
Experience: 6 - 9 years

At Sibros, we are building the foundational data infrastructure that powers the software-defined future of mobility. One of our most impactful products, Deep Logger, enables rich, scalable, and intelligent data collection from connected vehicles, unlocking insights that were previously inaccessible.

Our platform ingests high-frequency telemetry, diagnostic signals, user behavior, and system health data from vehicles across the globe. We transform this into actionable intelligence through real-time analytics, geofence-driven alerting, and predictive modeling for use cases like trip intelligence, fault detection, battery health, and driver safety.

We're looking for a Senior Software Engineer to help scale the backend systems that support Deep Logger's data pipeline, from ingestion and streaming analytics to long-term storage and ML model integration. You'll play a key role in designing high-throughput, low-latency systems that operate reliably in production, even as data volumes scale to billions of events per day.

In this role, you'll collaborate across firmware, data science, and product teams to deliver solutions that are not only technically robust, but also critical to safety, compliance, and business intelligence for OEMs and fleet operators. This is a unique opportunity to shape the real-time intelligence layer of connected vehicles, working at the intersection of event-driven systems, cloud-native infrastructure, and automotive-grade reliability.

What You'll Do
Lead the Design and Evolution of Scalable Data Systems: Architect end-to-end real-time and batch data processing pipelines that power mission-critical applications such as trip intelligence, predictive diagnostics, and geofence-based alerts. Drive system-level design decisions and guide the team through technology tradeoffs.
Mentor and Uplift the Engineering Team: Act as a technical mentor to junior and mid-level engineers. Conduct design reviews, help grow data engineering best practices, and champion engineering excellence across the team.
Partner Across the Stack and the Org: Collaborate cross-functionally with firmware, frontend, product, and data science teams to align on roadmap goals. Translate ambiguous business requirements into scalable, fault-tolerant data systems with high availability and performance guarantees.
Drive Innovation and Product Impact: Shape the technical vision for real-time and near-real-time data applications. Identify and introduce cutting-edge open-source or cloud-native tools that improve system reliability, observability, and cost efficiency.
Operationalize Systems at Scale: Own the reliability, scalability, and performance of the pipelines you and the team build.
Lead incident postmortems, drive long-term stability improvements, and establish SLAs/SLOs that balance customer value with engineering complexity.
Contribute to Strategic Technical Direction: Provide thought leadership on evolving architectural patterns, such as transitioning from streaming-first to hybrid batch-stream systems for cost and scale efficiency. Proactively identify bottlenecks, tech debt, and scalability risks.

What You Should Know
7+ years of experience in software engineering with a strong emphasis on building and scaling distributed systems in production environments.
Deep understanding of computer science fundamentals including data structures, algorithms, concurrency, and distributed computing principles.
Proven expertise in designing, building, and maintaining large-scale, low-latency data systems for real-time and batch processing.
Hands-on experience with event-driven architectures and messaging systems like Apache Kafka, Pub/Sub, or equivalent technologies.
Strong proficiency in stream processing frameworks such as Apache Beam, Flink, or Google Cloud Dataflow, with a deep appreciation for time and windowing semantics, backpressure, and checkpointing (a minimal windowing sketch follows the qualifications below).
Demonstrated ability to write production-grade code in Go or Java, following clean architecture principles and best practices in software design.
Solid experience with cloud-native infrastructure including Kubernetes, serverless compute (e.g., AWS Lambda, GCP Cloud Functions), and containerized deployments using CI/CD pipelines.
Proficiency with cloud platforms, especially Google Cloud Platform (GCP) or Amazon Web Services (AWS), and services like BigQuery, S3/GCS, IAM, and managed Kubernetes (GKE/EKS).
Familiarity with observability stacks (e.g., Prometheus, Grafana, OpenTelemetry) and an understanding of operational excellence in production environments.
Ability to balance pragmatism with technical rigor, navigating ambiguity to design scalable and cost-effective solutions.
Passionate about building platforms that empower internal teams and deliver meaningful insights to customers, especially within the automotive, mobility, or IoT domains.
Strong communication and collaboration skills, with experience working closely across product, firmware, and analytics teams.

Preferred Qualifications
Experience architecting and building systems for large-scale IoT or telemetry-driven applications, including ingestion, enrichment, storage, and real-time analytics.
Deep expertise in both streaming and batch data processing paradigms, using tools such as Apache Kafka, Apache Flink, Apache Beam, or Google Cloud Dataflow.
Hands-on experience with cloud-native architectures on platforms like Google Cloud Platform (GCP), AWS, or Azure, leveraging services such as Pub/Sub, BigQuery, Cloud Functions, Kinesis, etc.
Experience working with high-performance time-series or analytical databases such as ClickHouse, Apache Druid, or InfluxDB, optimized for millisecond-level insights at scale.
Proven ability to design resilient, fault-tolerant pipelines that ensure data quality, integrity, and observability in high-throughput environments.
Familiarity with schema evolution, data contracts, and streaming-first data architecture patterns (e.g., Change Data Capture, event sourcing).
Experience working with geospatial data, telemetry, or real-time alerting systems is a strong plus.
Contributions to open-source projects in the data or infrastructure ecosystem, or active participation in relevant communities, are valued.
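As referenced in the stream-processing bullet above, here is a minimal Apache Beam sketch of event-time windowing: counting telemetry events per vehicle in one-minute fixed windows. The inline test data stands in for a Pub/Sub or Kafka source and the VINs are invented; this is illustrative, not Sibros production code.

```python
# Minimal windowing sketch; the hard-coded events replace a streaming source.
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

# (vehicle id, event time in seconds) -- both values are invented
events = [("vin-123", 0.0), ("vin-123", 15.0), ("vin-456", 70.0)]

with beam.Pipeline() as p:
    (
        p
        | beam.Create(events)
        | beam.Map(lambda e: TimestampedValue((e[0], 1), e[1]))  # attach event-time stamps
        | beam.WindowInto(FixedWindows(60))                      # 60-second fixed windows
        | beam.CombinePerKey(sum)                                # events per vehicle per window
        | beam.Map(print)  # e.g. ('vin-123', 2) for the first window
    )
```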
What We Offer
Competitive compensation package and benefits.
A dynamic work environment with a flat hierarchy and the opportunity for rapid career advancement.
Collaborate with a dynamic team that's passionate about solving complex problems in the automotive IoT space.
Access to continuous learning and development opportunities.
Flexible working hours to accommodate different time zones.
Comprehensive benefits package including health insurance and wellness programs.
A culture that values innovation and promotes a work-life balance.
Posted 1 week ago
5.0 - 10.0 years
14 - 24 Lacs
hyderabad, bengaluru
Work from Office
Role: Cloud Platforms & Infrastructure
Location: Bangalore/Hyderabad
Position: Fulltime

Cloud Platforms & Infrastructure / High Performance Computing: HPC, GCE (Google Compute Engine), GCS (Google Cloud Storage), HPC storage, Dynamic Workload Scheduler for GPU/GCE, and IaC/Terraform
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
pune, maharashtra, india
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role, you will:
- Migrate and re-engineer existing services from on-premises data centers to Cloud (GCP/AWS)
- Understand business requirements and provide real-time solutions
- Use project development tools such as JIRA, Confluence and Git
- Write Python/shell scripts to automate operations and server management
- Build and maintain operations tools for monitoring, notifications, trending, and analysis
- Define, create, test, and execute operations procedures
- Document current and future configuration processes and policies

Requirements
To be successful in this role, you should meet the following requirements:
- 3 to 6 years of experience
- Hadoop knowledge and NiFi/Kafka experience
- Python/Java at an intermediate level
- Good experience/knowledge of GCP components such as GCS, BigQuery, Airflow, Cloud SQL, Pub/Sub/Kafka, Dataflow and the Google Cloud SDK
- Experience in Java, the Spring Boot framework and JPA/Hibernate is preferable
- Understanding of Terraform and shell scripting
- Experience with at least one RDBMS
- GCP Data Engineer certification is an added advantage
- Knowledge of Data Warehouse/ETL and Big Data technologies such as Hive, Spark and NiFi
- Flexible to work in a Linux/Unix environment for handling support and execution activities
- Good understanding of DWH, data ingestion and data engineering concepts
- Good to have: knowledge of Jenkins, Ansible, Git, CI/CD
- Flexible to adopt new technologies/skills
- Good to have: knowledge of scheduling tools such as Control-M and TWS

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSBC Software Development India
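To illustrate the Pub/Sub side of the stack listed above, a minimal Python pull-subscriber looks like the following; this is a sketch only, with placeholder project and subscription IDs, not HSBC code.

```python
# Minimal Pub/Sub pull-subscriber sketch; IDs are placeholders.
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "ops-events-sub")

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    print(f"Received: {message.data!r}")
    message.ack()  # acknowledge so the message is not redelivered

future = subscriber.subscribe(subscription, callback=handle)
try:
    future.result(timeout=30)  # pull messages for 30 seconds, then stop
except TimeoutError:
    future.cancel()
    future.result()  # wait for the background threads to shut down
```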
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a qualified candidate for this role, you should have in-depth expertise in Google Cloud Platform (GCP) services such as Pub/Sub, BigQuery, Airflow, Dataproc, Cloud Composer, and Google Cloud Storage (GCS). Proficiency in Dataflow and Java is a must for this position; experience with Kafka would be a plus. Your responsibilities will include working with these technologies to design, develop, and maintain scalable and efficient data processing systems. If you meet these requirements and are eager to work in a dynamic and innovative environment, we look forward to reviewing your application.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Java Backend Developer based in Chennai, you will be responsible for leveraging your 7-10 years of experience to contribute to our team. Your primary focus will be on Java Spring Boot, where you will utilize your expertise in backend APIs, DB interactions, Kafka, performance tuning, and JUnit test cases. Additionally, your proficiency in NoSQL databases such as CosmosDB and Cassandra will be essential as you work on async calls, DB optimization, and ensuring seamless operations.

Experience with Kubernetes and Docker will enable you to excel in cloud deployments and containerization tasks. Hands-on experience with Kafka for producing, consuming, and scaling will be a key aspect of your role, while familiarity with GCP Cloud services like GCS, BigQuery, Dataproc clusters, and the Cloud SDK will be advantageous.

This full-time onsite position in Chennai offers you the opportunity to apply your skills in a dynamic environment. If you are ready to take on this challenge, we encourage you to apply by sharing your CV at tanisha.g@magnify360solutions.com or contacting us at 9823319238. Join us in shaping the future of software engineering and technology as part of our innovative team.

#JavaDeveloper #BackendDeveloper #SpringBoot #Kafka #NoSQL #CosmosDB #Cassandra #Kubernetes #Docker #GCP #CloudComputing #Microservices #ChennaiJobs #ITJobs #NowHiring #SoftwareEngineering #TechJobs
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Principal Software Engineer, you will play a vital role in the design, development, and deployment of advanced AI and generative AI-based products. Your main responsibilities will include driving technical innovation, leading complex projects, and working closely with cross-functional teams to deliver high-quality, scalable, and maintainable solutions. To excel in this role, you must possess a strong background in software development, AI/ML techniques, and DevOps practices. Mentoring junior engineers and contributing to strategic technical decisions are also key aspects of this position.

Your primary responsibilities will involve advanced software development, where you will design, develop, and optimize high-quality code for complex software applications and systems. It will be crucial to maintain high standards of performance, scalability, and maintainability while driving best practices in code quality, documentation, and test coverage. Furthermore, you will lead the end-to-end development of generative AI solutions, from data collection and model training to deployment and optimization. Experimenting with cutting-edge generative AI techniques to enhance product capabilities and performance will be a key part of your role.

As a technical leader, you will take ownership of architecture and technical decisions for AI/ML projects. You will mentor junior engineers, review code for adherence to best practices, and ensure that the team maintains a high standard of technical excellence. Project ownership will also be a significant part of your responsibilities: you will lead the execution and delivery of features, manage project scope, timelines, and priorities in collaboration with product managers, and proactively identify and mitigate risks.

You will contribute to the architectural design and planning of new features, ensuring that solutions are scalable, reliable, and maintainable, and engage in technical reviews with peers and stakeholders to promote a product suite mindset. You will conduct rigorous code reviews to ensure adherence to industry best practices, maintainability, and performance optimization, and provide feedback that supports team growth and technical improvement.

In addition, you will design and implement robust test suites to ensure code quality and system reliability, advocating for test automation and the use of CI/CD pipelines to streamline testing processes and maintain service health. You will also monitor and maintain the health of deployed services, utilizing telemetry and performance indicators to proactively address potential issues, performing root cause analysis for incidents, and driving preventive measures for improved system reliability.

Taking end-to-end responsibility for features and services in a DevOps model to deploy and manage software in production is part of your role, including efficient incident response and maintaining a high level of service availability. You will also create and maintain thorough documentation for code, processes, and technical decisions, and contribute to knowledge sharing within the team to enable continuous learning and improvement.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Engineering, or a related technical field (a Master's degree is preferred) and at least 6 years of professional software development experience, including significant experience with AI/ML or GenAI applications. Demonstrated expertise in building scalable, production-grade software solutions is essential. Advanced proficiency in Python, FastAPI, PyTest, Celery, and other Python frameworks, along with deep knowledge of software design patterns, object-oriented programming, and concurrency, is required. Extensive experience with cloud technologies (e.g., GCP, AWS, Azure), containerization (e.g., Docker, Kubernetes), CI/CD practices, version control systems (e.g., GitHub), and work-tracking tools (e.g., JIRA) is also necessary.

Familiarity with GenAI frameworks (e.g., LangChain, LangGraph), MLOps, AI lifecycle management, and model deployment and monitoring in cloud environments is preferred. Hands-on experience with advanced ML algorithms, including generative models, NLP, and transformers, and knowledge of industry-standard AI frameworks (e.g., TensorFlow, PyTorch) are advantageous. Proficiency with relational and NoSQL databases (e.g., MongoDB, MSSQL, PostgreSQL), analytics platforms (e.g., BigQuery, Snowflake, Tableau), and messaging systems (e.g., Kafka) is a plus. Experience with test automation tools (e.g., PyTest, xUnit) and CI/CD tooling such as Terraform and GitHub Actions, with a strong emphasis on building resilient and testable software, is also beneficial, as is proficiency with GCP technologies such as Vertex AI, BigQuery, GKE, GCS, and Dataflow, focusing on deploying AI models at scale.

In conclusion, as a Principal Software Engineer at our organization, you will play a critical role in driving technical innovation, leading complex projects, and collaborating with cross-functional teams to deliver high-quality, scalable, and maintainable AI and generative AI-based products. Your expertise in software development, AI/ML techniques, and DevOps practices, along with your ability to mentor junior engineers and contribute to strategic technical decisions, will be instrumental in your success in this role.
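Since the stack above names FastAPI, a minimal sketch of the kind of service endpoint implied is shown below; the route, model, and stubbed response are illustrative assumptions, not the employer's API.

```python
# Minimal FastAPI sketch; endpoint and model names are invented.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # A real service would call a deployed GenAI model (e.g., via Vertex AI) here
    return {"echo": prompt.text}
```

Run with `uvicorn main:app --reload` and POST JSON like `{"text": "hello"}` to `/generate` to exercise it.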
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Tech Anchor/Solution Architect within the Industrial System Analytics (ISA) team, you will play a crucial role in providing technical leadership and guidance to the development team, ensuring the design and implementation of cloud analytic solutions using GCP tools and techniques. Your responsibilities will include offering technical guidance, mentorship, and code-level support to the team, as well as collaborating to develop and implement solutions using GCP tools and APIs/microservices. You will be accountable for driving adherence to security, legal, and Ford standard/policy compliance while focusing on efficient delivery and identifying and mitigating risks.

In this role, you will lead the design and architecture of complex systems, emphasizing scalability, reliability, and performance. You will participate in code reviews, improve code quality, and champion Agile software processes, best practices, and techniques. You will also oversee the onboarding of new resources, assess product health, and make key decisions. It is essential to ensure the effective use of Rally for deriving meaningful insights, and to implement DevSecOps and software craftsmanship practices.

Good-to-have skills include experience with GCP services like Cloud Run, Cloud Build, Cloud Source Repositories, and Cloud Workflows. Knowledge of containerization using Docker and Kubernetes, familiarity with serverless architecture and event-driven design patterns, machine learning and data science concepts, and data engineering and data warehousing will be advantageous. Holding a certification in GCP or another cloud platform is also beneficial. Soft skills such as strong communication and collaboration, the ability to work in a fast-paced, agile environment, and a proactive attitude are highly valued for success in this position.

The technical requirements include a Bachelor's/Master's/PhD in Engineering, Computer Science, or a related field, along with senior-level experience (8+ years) as a software engineer. Deep and broad knowledge of programming languages, front-end and back-end technologies, cloud technologies, deployment practices, Agile software development methodologies, CI/CD, and Test-Driven Development is necessary, as is understanding of or exposure to CI/CD and tools like GitHub, Terraform/Tekton, 42Crunch, SonarQube, FOSSA, and Checkmarx.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Demonstrates up-to-date expertise and applies this to the development, execution, and improvement of action plans by providing expert advice and guidance to others in the application of information and best practices; supporting and aligning efforts to meet customer and business needs; and building commitment for perspectives and rationales. Provides and supports the implementation of business solutions by building relationships and partnerships with key stakeholders; identifying business needs; determining and carrying out necessary processes and practices; monitoring progress and results; recognizing and capitalizing on improvement opportunities; and adapting to competing demands, organizational changes, and new responsibilities. Models compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity by incorporating these into the development and implementation of business plans; using the Open Door Policy; and demonstrating and assisting others with how to apply these in executing business processes and practices.

**Team and Position Summary:**
This role is on our Marketplace Seller Acquisition and Onboarding team. The Marketplace Engineering team is at the forefront of building core platforms and services that enable Walmart to deliver vast selection at competitive prices with a best-in-class seller onboarding experience, by enabling third-party sellers to list, sell and manage their products to our customers on walmart.com. We do this by managing the entire seller lifecycle, monitoring customer experience, and delivering high-value insights to our sellers to help them plan their assortment, price, and inventory. The team also actively collaborates with partner platform teams to ensure we continue to deliver the best experience to our sellers and our customers. This role will be focused on the Marketplace.

**Position Responsibilities:**
You want a challenge? Come join a team that is merging digital and physical, building real-time systems at scale, and responding quicker to changes.

You'll sweep us off our feet if you:
- Act as the Senior Software Engineer for the team, taking ownership of technical projects and solutions.
- Lead by example, setting technical standards and driving overall technical architecture and design.
- Mentor junior developers, enhancing their skills and understanding of development best practices.
- Desire to keep up with technology trends.
- Encourage others to grow and be curious.
- Provide technical leadership in every stage of the development process, from design to deployment, ensuring adherence to best practices.
- Have the desire to learn.
- Maintain and improve application performance, compatibility, and responsiveness.
- Drive for engineering and operational excellence, delivering high-quality solutions and processes.

**You'll make an impact by:**
As a Senior Software Engineer for Walmart, you'll have the opportunity to apply and develop CRM solutions and build efficient, scalable models at Walmart scale. Through this role, you have an opportunity to develop intuitive software that meets and exceeds the needs of the customer and the company. You also get to collaborate with team members to develop best practices and requirements for the software. In this role, it is important for you to professionally maintain all code and create updates regularly to address customers' and the company's concerns.
You will show your skills in analyzing and testing programs/products before formal launch to ensure flawless performance. Troubleshooting coding problems quickly and efficiently will offer you a chance to grow your skills in a high-pace, high-impact environment. Software security is of prime importance, and by developing programs that monitor the sharing of private information, you will add tremendous credibility to your work. You will also seek ways to improve the software and its effectiveness, and adhere to company policies, procedures, mission, values, and standards of ethics and integrity.

**Position Requirements:**

**Preferred skills:** Agentforce, Salesforce-managed models, Gen AI, LLMs

**Minimum qualifications:**
- 3-5 years of software engineering experience with Salesforce.com platform knowledge; knowledge of Salesforce-managed models, Gen AI, and LLMs is good to have.
- Proven working experience as a CRM Data Engineer with a minimum of 3 years in the field.
- Strong programming skills in Scala and experience with Spark for data processing and analytics.
- Familiarity with Google Cloud Platform (GCP) services such as BigQuery, GCS, Dataproc, Pub/Sub, etc.
- Experience with data modeling, data integration, and ETL processes.
- Excellent working experience in Salesforce with Apex, Visualforce, Lightning, and Force.com.
- Strong knowledge of Sales Cloud, Service Cloud, and Experience Cloud (Community Cloud).
- Experience in application customization and development, including Lightning pages, Lightning Web Components, Aura Components, and Apex (classes and triggers).
- Experience in integrating Salesforce with other systems using Salesforce APIs, SOAP, REST API, etc.
- Proficient with Microsoft Visual Studio, the Salesforce Lightning Design System, and the Salesforce development lifecycle.
- Knowledge of tools such as Data Loader, ANT, Workbench, and Git/Bitbucket version control.
- Knowledge of deployment activities (CI/CD) between Salesforce environments.
- Knowledge of high-quality professional software engineering practices for the agile software development cycle, including coding standards, code reviews, source control management, build processes, testing, and deployment.
- Fundamental knowledge of design patterns.
- Experience communicating effectively with users, other technical teams, and management to collect requirements and describe software product features and technical designs.
- Mentoring team members to meet clients' needs and holding them accountable for high standards of delivery.
- Ability to understand and relate technology integration scenarios and apply these learnings in complex troubleshooting scenarios.

**RESPONSIBILITIES:**
- Writing and reviewing great quality code.
- Understanding functional requirements thoroughly and analyzing the clients' needs in the context of the project.
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns, and frameworks to realize it.
- Determining and implementing design methodologies and tool sets.
- Enabling application development by coordinating requirements, schedules, and activities.
- Being able to lead/support UAT and production rollouts.
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it.
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement.
- Giving constructive feedback to the team members and setting clear expectations.
- Helping the team in troubleshooting and resolving complex bugs.
- Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken.
- Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

**About Walmart Global Tech:**
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.

**Flexible, hybrid work:**
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

**Benefits:**
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

**Equal Opportunity Employer:**
Walmart, Inc. is an Equal Opportunity Employer, By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions while being respectful of all people.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
The ideal candidate for this position should have 3+ years of experience in full stack software development, along with expertise in cloud technologies and services, preferably GCP. The candidate should also possess at least 3 years of experience in applying statistical methods such as ANOVA and principal component analysis. Proficiency in Python, SQL, and BigQuery is a must, as is experience with tools like SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, and TensorFlow.

Experience in training, building, and deploying ML and DL models is an essential requirement for this role. Familiarity with HuggingFace, Chainlit, Streamlit, and React would be an added advantage.

The position is based in Chennai and Bangalore. A minimum of 3 to 5 years of relevant experience is required for this role.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
Are you ready to contribute to Mondelēz International's mission of leading the future of snacking with pride? As a member of the analytics team, you will play a vital role in supporting the business by developing data models that uncover trends essential for driving long-term business results.

In this role, you will work closely with analytics team leaders to execute the business analytics agenda, collaborate with external partners adept at leveraging analytics tools and processes, and utilize models/algorithms to identify signals, patterns, and trends that can enhance long-term business performance. Your methodical approach to executing the business analytics agenda will effectively communicate the value of analytics to stakeholders.

To excel in this position, you should possess a strong desire to propel your career forward and have experience in using data analysis to provide recommendations to analytics leaders. Familiarity with best-in-class analytics practices, Key Performance Indicators (KPIs), and BI tools such as Tableau, Excel, Alteryx, R, and Python will be advantageous.

As a DaaS Data Engineer at Mondelēz International, you will have the exciting opportunity to design and construct scalable, secure, and cost-effective cloud-based data solutions. Your responsibilities will include developing and maintaining data pipelines, ensuring data quality, optimizing data storage, and collaborating with various teams and stakeholders while staying abreast of the latest cloud technologies and best practices.

Key responsibilities of this role involve designing and implementing cloud-based data solutions, managing data pipelines for data extraction and transformation, maintaining data quality and validation processes, optimizing data storage for efficiency, and fostering collaboration with data teams and product owners to drive innovation.

In terms of technical requirements, proficiency in programming languages like Python, PySpark, and Go/Java; database management skills in SQL and PL/SQL; and expertise in ETL and integration tools, data warehousing concepts, visualization tools, and cloud services such as GCP and AWS are essential. Experience with supporting technologies and soft skills like problem-solving, communication, analytical thinking, attention to detail, and adaptability will further enhance your effectiveness in this role.

Mondelēz International offers within-country relocation support and minimal assistance for candidates considering international relocation through the Volunteer International Transfer Policy. Join us in empowering people to snack right with our diverse portfolio of globally recognized brands and be a part of a dynamic community that is driving growth and living our purpose and values.
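As a purely illustrative sketch of the PySpark pipeline work described above (assumed GCS paths and column names, and a cluster with the GCS connector configured), a small transform-and-load might look like this:

```python
# Minimal PySpark sketch; paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daas_pipeline_sketch").getOrCreate()

# Read raw CSVs and derive a net amount, casting the string CSV columns first
sales = (
    spark.read.option("header", True).csv("gs://raw-zone/sales/*.csv")
    .withColumn(
        "net_amount",
        F.col("gross_amount").cast("double") - F.col("discount").cast("double"),
    )
)

# Write curated output partitioned by country for efficient downstream reads
(sales.write.mode("overwrite")
      .partitionBy("country")
      .parquet("gs://curated-zone/sales_net/"))
```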
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
Candidates for this position are preferred to be based in Bangalore, India, and will be expected to comply with their team's hybrid work schedule requirements.

Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits. The team is building Sponsored Products, Display & Video Ad offerings that cater to various advertiser goals while delivering highly relevant and engaging ads to millions of customers. The Ads Platform is being evolved to empower advertisers of all sophistication levels to grow their business on Wayfair with a strong, positive ROI by utilizing state-of-the-art machine learning techniques.

The Advertising Optimization & Automation Science team plays a central role in this effort. The team leverages machine learning and generative AI to streamline campaign workflows, providing impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, they are developing intelligent systems for creative optimization and exploring agentic frameworks to simplify and enhance advertiser interactions.

An experienced ML Scientist III is sought to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for developing budget, tROAS, and SKU recommendations along with other machine learning capabilities that support the ads business. You will collaborate closely with other scientists, as well as members of internal Product and Engineering teams, to apply your engineering and machine learning skills to solve impactful and intellectually challenging problems that directly influence Wayfair's revenue.

**What you'll do:**
- Provide technical leadership in developing an automated and intelligent advertising system by advancing machine learning techniques to offer recommendations for Ads campaigns and other optimizations.
- Design, build, deploy, and refine scalable, real-world platforms that optimize the ads experience.
- Collaborate with commercial stakeholders to understand business problems or opportunities and develop machine learning solutions accordingly.
- Work closely with engineering, infrastructure, and machine learning platform teams to ensure best practices in building and deploying scalable machine learning services.
- Identify new opportunities and insights from data, improving models and projecting the ROI of proposed modifications.
- Stay updated on new developments in advertising, sort, and recommendations research, and incorporate them into internal packages and systems.
- Maintain a customer-centric approach in solving problems.

**We Are a Match Because You Have:**
- Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
- 6-9 years of industry experience in advanced machine learning and statistical modeling, including building production models at scale.
- Strong theoretical understanding of statistical models and machine learning algorithms.
- Familiarity with machine learning model development frameworks, orchestration, and pipelines.
- Proficiency in Python or another high-level programming language.
- Hands-on experience deploying machine learning solutions into production.
- Strong communication skills and a bias towards simplicity.

**Nice to have:**
- Familiarity with machine learning platforms offered by Google Cloud.
- Experience in computational advertising, bidding algorithms, or search ranking.
- Experience with deep learning frameworks like PyTorch, TensorFlow, etc.
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
chennai, tamil nadu, india
On-site
JOB DESCRIPTION

GDIA Mission and Scope: The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics.

We are seeking a highly technical and experienced individual to fill the role of Tech Anchor/Solution Architect within our Industrial System Analytics (ISA) team. As a Tech Anchor, you will provide technical leadership and guidance to the development team, driving the design and implementation of cloud analytic solutions using GCP tools and techniques.

RESPONSIBILITIES

Key Roles and Responsibilities of Position:
Provide technical guidance, mentorship, and code-level support to the development team
Work with the team to develop and implement solutions using GCP tools (BigQuery, GCS, Dataflow, Dataproc, etc.) and APIs/Microservices
Ensure adherence to security, legal, and Ford standard/policy compliance
Drive effective and efficient delivery from the team, focusing on speed
Identify risks and implement mitigation/contingency plans
Assess the overall health of the product and raise key decisions
Manage onboarding of new resources
Lead the design and architecture of complex systems, ensuring scalability, reliability, and performance
Participate in code reviews and contribute to improving code quality
Champion Agile software processes, culture, best practices, and techniques
Ensure effective usage of Rally and derive meaningful insights
Ensure implementation of DevSecOps and software craftsmanship practices (CI/CD, TDD, Pair Programming)

QUALIFICATIONS

Qualifications:
Bachelor's/Master's/PhD in Engineering, Computer Science, or a related field
Senior-level experience (8+ years) as a software engineer
Deep and broad knowledge of:
Programming Languages: Java, JavaScript, Python, SQL
Front-End Technologies: React, Angular, HTML, CSS
Back-End Technologies: Node.js, Python frameworks (Django, Flask), Java frameworks (Spring)
Cloud Technologies: GCP (BigQuery, GCS, Dataflow, Dataproc, etc.)
Deployment Practices: Docker, Kubernetes, CI/CD pipelines
Experience with Agile software development methodologies
Understanding of/exposure to CI, CD, and Test-Driven Development (GitHub, Terraform/Tekton, 42Crunch, SonarQube, FOSSA, Checkmarx, etc.)
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Req ID: 316015

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Sr. Zabbix Administrator to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Zabbix Administration and Support: Roles and responsibilities
- In-depth knowledge of Enterprise Monitoring tool architecture, administration, and configuration.
- Technically manage the design and implementation of the Zabbix tool; hands-on experience with end-to-end deployment.
- In-depth knowledge of systems management, monitoring tools, ITIL processes, integrations with different tools, and scripting.
- Good understanding of automation and enterprise-wide monitoring tooling solutions.
- Hands-on experience in integrating Enterprise Monitoring tools with ITSM platforms.
- Minimum 5 years of hands-on experience in administering and configuring Enterprise Monitoring tools at an L3 level.
- Knowledge of IT infrastructure programming/scripting (Shell, JSON, MySQL, Python, Perl).
- Good understanding of operating systems (Windows and Unix).
- Must have good knowledge of public cloud platforms (Azure, AWS, GCP).
- Install and configure software and hardware.
- Apply Zabbix patches and upgrades once available to keep the environment current.
- Lead troubleshooting of issues and outages.
- Provide technical support as requested, for internal and external customers, primarily for Zabbix.
- Undertake individual assignments or work on a project as part of a larger team, analyzing customer requirements, gathering and analyzing data, and recommending solutions.
- Ensure assignments are undertaken consistently and with quality.
- Produce and update assignment documentation as required.
- Experienced in customer interaction, with good verbal and written communication skills.
- Experienced in dealing with internal and external stakeholders independently during transitions and project-driven activities.
- Willing to work in a 24x7 environment.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.
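For a flavor of the scripting and ITSM-integration work this role calls for, below is a minimal Python sketch against the Zabbix JSON-RPC API. The server URL and credentials are hypothetical, parameter names vary slightly between Zabbix versions, and the third-party requests library is assumed.

```python
# Minimal sketch of calling the Zabbix JSON-RPC API from Python, of the kind an
# L3 administrator might script for ITSM integrations. URL/credentials are hypothetical.
import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # hypothetical endpoint


def zabbix_call(method, params, auth=None, req_id=1):
    """POST a single JSON-RPC 2.0 request to the Zabbix API and return its result."""
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth is not None:
        payload["auth"] = auth  # session token; newer releases also accept a Bearer header
    resp = requests.post(ZABBIX_URL, json=payload, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["result"]


# Log in ("username" on Zabbix >= 6.0; older versions use "user")
token = zabbix_call("user.login", {"username": "api-user", "password": "secret"})

# List a few monitored hosts as a spot-check
for host in zabbix_call("host.get", {"output": ["hostid", "host"], "limit": 5}, auth=token):
    print(host["hostid"], host["host"])
```

The same wrapper extends naturally to acknowledging events or pushing alerts into an ITSM tool, which is where most of the integration effort in a role like this tends to sit.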
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
pune, maharashtra, india
On-site
Req ID: 327318
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP & GKE - Sr Cloud Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).
Job Title / Role: GCP & GKE - Sr Cloud Engineer
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 4+ years
Total Experience: 4+ years
Mandatory Skills:
Technical Qualification/Knowledge:
- Expertise in assessing, designing and implementing GCP solutions covering compute, network, storage, identity, security, DR/business continuity strategy, migration, templates and cost optimization, using tools such as PowerShell, Terraform and Ansible.
- Must have GCP Solution Architect Certification.
- Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning and migration execution.
- Prior experience using industry-leading or native discovery, assessment and migration tools.
- Good knowledge of cloud technology, different patterns, deployment methods, and application compatibility.
- Good knowledge of GCP technologies and their associated components and variations, including the Anthos application platform.
- Working knowledge of GCE, GAE, GKE and GCS.
- Hands-on experience creating and provisioning compute instances using the GCP console, Terraform and the Google Cloud SDK.
- Creating databases in GCP and in VMs.
- Knowledge of the data analysis tool BigQuery.
- Knowledge of cost analysis and cost optimization.
- Knowledge of Git and GitHub; knowledge of Terraform and Jenkins.
- Monitoring VMs and applications using Stackdriver.
- Working knowledge of VPN and Interconnect setup.
- Hands-on experience setting up HA environments and creating VM instances on Google Cloud Platform.
- Hands-on experience with Cloud Storage and storage retention policies.
- Managing users via Google IAM and granting them appropriate permissions.
GKE:
- Install tools: set up Kubernetes tools.
- Administer a cluster.
- Configure Pods and containers: perform common configuration tasks for Pods and containers.
- Monitoring, logging, and debugging.
- Inject data into applications: specify configuration and other data for the Pods that run your workload.
- Run applications: run and manage both stateless and stateful applications.
- Run Jobs using parallel processing.
- Access applications in a cluster.
- Extend Kubernetes: understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment.
- Manage cluster daemons: perform common tasks for managing a DaemonSet, such as performing a rolling update.
- Extend kubectl by creating and installing kubectl plugins.
- Manage HugePages: configure and manage huge pages as a schedulable resource in a cluster.
- Schedule GPUs: configure and schedule GPUs for use as a resource by nodes in a cluster.
Certification: GCP Engineer & GKE
Academic Qualification: B.Tech or equivalent, or MCA
Process/Quality Knowledge: Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
Knowledge of quality and security processes.
Soft Skills:
- Good communication skills and the ability to work directly with global customers.
- Timely and accurate communication.
- Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution.
- Flexibility to learn and lead other technology areas such as other public cloud technologies, private cloud, and automation.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at .
NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.
NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.
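As an illustration of the GKE cluster-administration tasks this role lists, here is a minimal Python sketch using the official Kubernetes Python client (kubectl and gcloud are the more common tools; this is just one scripted alternative). It assumes the kubernetes package is installed and that kubeconfig already points at a GKE cluster.

```python
# Minimal sketch: inventory pods and DaemonSets on a cluster, e.g. as a health
# spot-check before a rolling update. Assumes kubeconfig targets the cluster.
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config() instead
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

# DaemonSets (relevant when planning a rolling update) live in the apps/v1 API group.
apps = client.AppsV1Api()
for ds in apps.list_daemon_set_for_all_namespaces().items:
    print(f"daemonset {ds.metadata.namespace}/{ds.metadata.name}")
```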
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As an Architect Consultant at TekWissen, a global workforce management provider, you will play a crucial role in providing technical leadership and architectural strategy for enterprise-scale data, analytics, and cloud initiatives. Your responsibilities will include partnering with business and product teams to design scalable, secure, and high-performing solutions that align with enterprise architecture standards and business goals. Additionally, you will assist GDIA teams in architecting new and existing applications using Cloud architecture patterns and processes. In this role, based in Chennai, you will collaborate with product teams to define, assemble, and integrate components according to client standards and business requirements. You will support the product team in developing technical designs and documentation, participate in proofs of concept, and contribute to product solution evaluation processes. Your expertise will be crucial in providing architecture guidance and technical design leadership, and you will need to work on multiple projects simultaneously. The required skills for this position include proficiency in GCP, Cloud Architecture, APIs, Enterprise Architecture, Solution Architecture, CI/CD, and Data/Analytics. Preferred skills include experience with BigQuery, Java, React, Python, LLMs, Angular, GCS, GCP Cloud Run, Vertex, Tekton and Terraform, along with strong problem-solving abilities. To excel in this role, you should have direct hands-on experience in Google Cloud Platform architecture and a strong grasp of enterprise integration patterns, security architecture, and DevOps practices. Your demonstrated ability to lead complex technical initiatives and influence stakeholders across business and IT will be critical to your success. The ideal candidate should possess a Bachelor's degree and demonstrate a commitment to workforce diversity. Join TekWissen Group as an Architect Consultant and contribute to making the world a better place through innovative technological solutions.
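To make the Cloud Run and BigQuery items above concrete, here is a minimal sketch of the kind of containerized service such architectures typically describe. It assumes Flask and the google-cloud-bigquery client library; the endpoint, dataset and table names are hypothetical.

```python
# Minimal sketch of a Cloud Run-style HTTP service backed by BigQuery.
# Assumes Flask and google-cloud-bigquery are installed; the table is hypothetical.
import os

from flask import Flask, jsonify
from google.cloud import bigquery

app = Flask(__name__)
bq = bigquery.Client()  # picks up the runtime service account's credentials


@app.get("/event-count")
def event_count():
    # Hypothetical dataset/table; replace with a real one.
    rows = bq.query("SELECT COUNT(*) AS n FROM `analytics.events`").result()
    return jsonify({"count": next(iter(rows)).n})


if __name__ == "__main__":
    # Cloud Run injects the listening port via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```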
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
Your Role
- Very good understanding of the current work and the tools and technologies being used.
- Comprehensive knowledge and clarity on BigQuery, ETL, GCS, Airflow/Composer, SQL, Python.
- Experience working with Fact and Dimension tables and SCD.
- Minimum 3 years' experience in GCP Data Engineering.
- Java/Python/Spark on GCP, with programming experience in at least one of Python, Java or PySpark, plus SQL.
- Experience with GCS (Cloud Storage), Composer (Airflow) and BigQuery.
- Should have worked on handling big data.
Your Profile
- Strong data engineering experience using Java or Python programming languages or Spark on Google Cloud.
- Pipeline development experience using Dataflow or Dataproc (Apache Beam etc.).
- Exposure to other GCP services or databases such as Datastore, Bigtable, Spanner, Cloud Run, Cloud Functions etc.
- Proven analytical skills and a problem-solving attitude.
- Excellent communication skills.
What you'll love about working here
- You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders.
- You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work.
- You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
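For a concrete flavor of the Composer/Airflow, GCS and BigQuery stack this role names, below is a minimal DAG sketch, assuming Airflow 2.x with the apache-airflow-providers-google package installed; the bucket, dataset and table names are hypothetical.

```python
# Minimal sketch of a daily GCS -> BigQuery load, assuming Airflow 2.x and the
# Google provider package. All resource names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="gcs_to_bq_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # "schedule_interval" on Airflow releases before 2.4
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="example-landing-bucket",           # hypothetical bucket
        source_objects=["orders/{{ ds }}/*.csv"],  # files partitioned by run date
        destination_project_dataset_table="analytics.stg_orders",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",        # full refresh of the staging table
    )
```

A staging load like this would typically be followed by a SQL MERGE task that applies SCD logic against the dimension tables mentioned above.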
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Position Summary...
Drives the execution of multiple business plans and projects by identifying customer and operational needs; developing and communicating business plans and priorities; removing barriers and obstacles that impact performance; providing resources; identifying performance standards; measuring progress and adjusting performance accordingly; developing contingency plans; and demonstrating adaptability and supporting continuous learning. Provides supervision and development opportunities for associates by selecting and training; mentoring; assigning duties; building a team-based work environment; establishing performance expectations and conducting regular performance evaluations; providing recognition and rewards; coaching for success and improvement; and promoting a belonging mindset in the workplace. Promotes and supports company policies, procedures, mission, values, and standards of ethics and integrity by training and providing direction to others in their use and application; ensuring compliance with them; and utilizing and supporting the Open Door Policy. Ensures business needs are being met by evaluating the ongoing effectiveness of current plans, programs, and initiatives; consulting with business partners, managers, co-workers, or other key stakeholders; soliciting, evaluating, and applying suggestions for improving efficiency and cost-effectiveness; and participating in and supporting community outreach events.
What you'll do...
About The Team
The Data and Customer Analytics Team is a strategic unit dedicated to transforming data into actionable insights that drive customer-centric decision-making across the organization. Our mission is to harness the power of data to understand customer behavior, optimize business performance, and enable personalized experiences. The team is responsible for building and maintaining a centralized, scalable, and secure data platform that consolidates customer-related data from diverse sources across the organization, playing a foundational role in enabling data-driven decision-making, advanced analytics, and personalized customer experiences. It also plays a critical role in building trust with customers by implementing robust privacy practices, policies, and technologies that protect personal information throughout its lifecycle.
What You'll Do
- Work with customers and architects to define product specifications and solution design, and evolve the product using agile practices. You will have the complete bottom line for ensuring high-quality, on-time delivery of product enhancements in a fast-paced environment.
- Be responsible for the maintenance of key components of the platform and ensure production uptime.
- Work with operations and customer service teams to identify operational pain points, incorporate feedback into the product, and guide the engineers in the maintenance team through preventive and reactive maintenance.
- Analyze the current solution stack, proactively identify architecture improvement opportunities, prepare proposals and prototypes, guide the team to build the NFRs for increased production stability, automate processes as much as possible, and reduce manual maintenance effort.
- Provide architecture, technical and domain expertise in the team. The development manager should be a thought leader and self-driven person who can spot opportunities for functional and architectural improvements and is willing to roll up sleeves and help team members work on product specifications and design.
- This is not a pure people-manager role: technical competence, functional understanding, challenging the technical solutions the team proposes, and fine-tuning those solutions are a key part of the day-to-day work.
- Build a vibrant, positively motivated team with a high sense of urgency; set the bar high and provide the necessary support and mentoring to managers and team members to achieve it.
- Advocate planning and continuous improvement. Set and communicate clear, aligned goals, monitor progress and ensure leaders in your own organization do the same. Sponsor continuous improvement and the elimination of non-value-added work. Embrace values and implement diverse perspectives and ideas. Develop and communicate logical, convincing justifications, including lessons learnt, that build commitment and support for one's perspectives and initiatives.
- Actively monitor dependencies in a distributed application landscape and work with stakeholders to ensure that dependencies are resolved in a timely fashion. Weekly status reporting, early warnings, mitigations, and ensuring delivery per budget.
- Innovation: drive the strategy and innovation activities for data staging. Work to establish the product competitively and sustainably, with a long-term view in mind (2-4 years). Constantly challenge the status quo to drive innovation in the organization.
- Change Management: promote new ways of looking at products, problems and processes. Foster a sense of ownership, empowerment and personal commitment to work. Create a work environment that inspires and encourages people to excel.
- Talent Development: identify required capabilities and skill gaps within the organization and invest time in developing those capabilities. Work in a fast-paced development environment, interacting with product owners, business analysts, testers, developers and stakeholders across geographical locations.
- Budgeting: provide inputs to budgeting activities based on historical and forecast analysis.
- Resource Planning: provide inputs for resource plan requirements.
What You'll Bring
- Proven working experience in Data Engineering, with a minimum of 10-12 years in the field.
- Strong data engineering skills in Scala and experience with Spark for data processing and analytics.
- Strong expertise with Google Cloud Platform (GCP) services such as BigQuery, GCS, Dataproc etc.
- Experience developing near-real-time ingestion pipelines using Kafka and Spark structured streaming.
- Proven track record of developing enterprise and/or SaaS-based distributed applications.
- Experience with message-based systems (Kafka).
- Experience with distributed databases, distributed computing, and high-frequency transaction environments is a plus.
- Demonstrated ability to lead, mentor, and build high-performing teams in a fast-paced setting.
- Strong business and technical vision, able to drive strategy and execution.
- Experience shipping software on time and managing end-to-end development cycles.
- Excellent interpersonal, written, and verbal communication skills.
About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.
Flexible, hybrid work
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.
Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.
Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.
Equal Opportunity Employer
Walmart, Inc., is an Equal Opportunities Employer - By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions while being inclusive of all people.
Minimum Qualifications
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or related area, and 5 years' experience in software engineering or related area.
Option 2: 7 years' experience in software engineering or related area, plus 2 years' supervisory experience.
Preferred Qualifications
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Master's degree in computer science, computer engineering, computer information systems, software engineering, or related area, and 3 years' experience in software engineering or related area.
Primary Location
BLOCK-1, PRESTIGE TECH PACIFIC PARK, SY NO. 38/1, OUTER RING ROAD KADUBEESANAHALLI, India
R-2270344
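As an illustration of the near-real-time Kafka plus Spark structured streaming pipelines this role asks for (shown in PySpark rather than Scala for brevity), below is a minimal sketch. The broker, topic and bucket names are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Minimal sketch: stream events from Kafka into a GCS data lake as Parquet.
# Assumes PySpark with the spark-sql-kafka-0-10 connector; names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("customer-events-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical broker
    .option("subscribe", "customer-events")              # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "gs://example-lake/customer_events/")        # hypothetical bucket
    .option("checkpointLocation", "gs://example-lake/_chk/ce/")  # recovery bookkeeping
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```

In practice the value column would be parsed against a schema (from_json or Avro) before landing, but the skeleton above is the core of such an ingestion job.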
Posted 3 weeks ago
6.0 - 9.0 years
9 - 18 Lacs
pune
Work from Office
UI/UX Lead for Drone GCS & Fleet Management Software JOB TITLE : UI/UX Lead JOB LOCATION : Pune DESIGNATION : Lead DIVISION / DEPARTMENT : UI/UX JOB PURPOSE We are seeking a visionary and hands-on UI/UX Lead or Manager to own the design and user experience strategy for our drone ground control station (GCS) and fleet management platform. This role combines product thinking, human-centered design, and system-level UX to create interfaces that are not just functional but intuitive, safe, and delightful to use. DUTIES AND RESPONSIBILITIES 1. Lead the end-to-end UI/UX design lifecycle for web, desktop, and tablet-based GCS and fleet software 2. Design operator dashboards, mission planning workflows, live telemetry views, and map interfaces 3. Create wireframes, user flows, prototypes, and high-fidelity designs using tools like Figma or Adobe XD 4. Collaborate closely with product managers, drone pilots, and developers to define intuitive interactions 5. Conduct contextual research with users (e.g. field teams, drone operators, mission planners) 6. Optimize information hierarchy for real-time monitoring, alerts, mission logs, and fleet health 7. Ensure a cohesive design language across modules including video feed, map overlays, telemetry, battery management, and mission planner 8. Lead and mentor a small design team (2–3 designers), and contribute individually when needed 9. Work with the frontend team to implement and QA UI components using design systems 10. Champion usability, accessibility, and safety-critical UX patterns relevant to autonomous systems Skills Required 1. Portfolio showcasing dashboards, map-based UI, or control interfaces 2. Experience designing systems with real-time data, geospatial interfaces, or mission planning 3. Strong command of design tools: Figma, Sketch, Adobe XD, Framer, etc. 4. Ability to create interactive prototypes and design specs for dev handoff 5. Strong UX research, information architecture, and wireframing skills Competitive advantage 1. Familiarity with drone systems, avionics, robotics, or SCADA-like dashboards 2. Understanding of map SDKs like Mapbox, Leaflet, Cesium, or OpenLayers 3. Knowledge of safety-critical UX design patterns or situational awareness principles 4. Experience in designing GCS or telemetry-heavy interfaces for aerospace or defense 5. Familiarity with WebGL, Three.js, or 3D visualizations in UX Leadership & Collaboration 1. Guide product design vision for the GCS and drone command platforms 2. Collaborate with stakeholders from software, flight ops, QA, and hardware integration 3. Drive design reviews, usability tests, and feedback loops 4. Advocate for users while balancing engineering and regulatory constraints Personal Attributes 1. Systems thinker with meticulous attention to detail 2. Passionate about drones, autonomy, and human-robot interaction 3. Empathy-driven leader who mentors and empowers other designers 4. Comfortable working in fast-paced, iterative, and high-impact environments Bonus 1. Contributions to open-source UX libraries or drone tools 2. Experience with Flutter, React, or Qt for cross-platform deployment 3. Experience working in agile product teams or with ISO 26262 / DO-178-style processes QUALIFICATIONS MIN. EXPERIENCE REQ. 6+ years of UI/UX experience in complex B2B or technical products MIN. Qualification Req. Bachelor’s/Master’s in Design, HCI, Interaction Design, or related field
Posted 3 weeks ago