10.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Summary...

Drives the execution of multiple business plans and projects by identifying customer and operational needs; developing and communicating business plans and priorities; removing barriers and obstacles that impact performance; providing resources; identifying performance standards; measuring progress and adjusting performance accordingly; developing contingency plans; and demonstrating adaptability and supporting continuous learning. Provides supervision and development opportunities for associates by selecting and training; mentoring; assigning duties; building a team-based work environment; establishing performance expectations and conducting regular performance evaluations; providing recognition and rewards; coaching for success and improvement; and promoting a belonging mindset in the workplace. Promotes and supports company policies, procedures, mission, values, and standards of ethics and integrity by training and providing direction to others in their use and application; ensuring compliance with them; and utilizing and supporting the Open Door Policy. Ensures business needs are being met by evaluating the ongoing effectiveness of current plans, programs, and initiatives; consulting with business partners, managers, co-workers, or other key stakeholders; soliciting, evaluating, and applying suggestions for improving efficiency and cost-effectiveness; and participating in and supporting community outreach events.

What you'll do...

About The Team

The Data and Customer Analytics Team is a strategic unit dedicated to transforming data into actionable insights that drive customer-centric decision-making across the organization. Our mission is to harness the power of data to understand customer behavior, optimize business performance, and enable personalized experiences.
Our team is responsible for building and maintaining a centralized, scalable, and secure data platform that consolidates customer-related data from diverse sources across the organization. This team plays a foundational role in enabling data-driven decision-making, advanced analytics, and personalized customer experiences, and a critical role in building trust with customers by implementing robust privacy practices, policies, and technologies that protect personal information throughout its lifecycle.

What You'll Do

Work with customers and architects to define product specifications and solution design, and evolve the product using agile practices. You will have the complete bottom line for ensuring high-quality, on-time delivery of product enhancements in a fast-paced environment. Be responsible for the maintenance of key components of the platform and ensure production uptime. Work with operations and customer service teams to identify operational pain points, incorporate feedback into the product, and guide the engineers on the maintenance team in preventive and reactive maintenance. Analyze the current solution stack, proactively identify architecture improvement opportunities, prepare proposals and prototypes, guide the team to build the NFRs for increased production stability, automate processes as much as possible, and reduce manual maintenance effort. Be responsible for architecture, technical, and domain expertise in the team. The development manager should be a thought leader and self-driven person who can spot opportunities for functional and architecture improvements and is willing to roll up their sleeves and help team members work on product specifications and design. This is not a pure people-manager role: technical competence, functional understanding, challenging the team on proposed technical solutions, and fine-tuning solutions will be a key part of day-to-day work.
Build a vibrant, positively motivated team with a high sense of urgency; set the bar high and provide the necessary support and mentoring to managers and team members to achieve it. Advocate planning and continuous improvement. Set and communicate clear, aligned goals, monitor progress, and ensure leaders in your own organization do the same. Sponsor continuous improvement and the elimination of non-value-added work. Embrace values and implement diverse perspectives and ideas. Develop and communicate logical, convincing justifications, including lessons learnt, that build commitment and support for one's perspectives and initiatives. Actively monitor dependencies in a distributed application landscape and work with stakeholders to ensure that dependencies are resolved in a timely fashion. Provide weekly status reporting, early warnings, and mitigations, ensuring delivery per budget.

Innovation - Drive the strategy and innovation activities for data staging. Work to establish the product competitively and sustainably, with a long-term view (2-4 years) in mind. Constantly challenge the status quo to drive innovation in the organization.

Change Management - Promote new ways of looking at products, problems, and processes. Foster a sense of ownership, empowerment, and personal commitment to work. Create a work environment that inspires and encourages people to excel.

Talent Development - Identify required capabilities and skill gaps within the organization and invest time in developing those capabilities. Work in a fast-paced development environment, interacting with product owners, business analysts, testers, developers, and stakeholders across geographical locations.

Budgeting - Provide inputs for budgeting activities based on historical and forecast analysis.

Resource Planning - Provide inputs for resource plan requirements.

What You'll bring:

Proven working experience in Data Engineering, with a minimum of 10-12 years in the field.
Strong expertise in Scala and experience with Spark for data processing and analytics. Strong expertise with Google Cloud Platform (GCP) services such as BigQuery, GCS, Dataproc, etc. Experience developing near real-time ingestion pipelines using Kafka and Spark Structured Streaming. Proven track record of developing enterprise and/or SaaS-based distributed applications. Experience with message-based systems (Kafka). Experience with distributed databases, distributed computing, and high-frequency transaction environments is a plus. Demonstrated ability to lead, mentor, and build high-performing teams in a fast-paced setting. Strong business and technical vision, able to drive strategy and execution. Experience shipping software on time and managing end-to-end development cycles. Excellent interpersonal, written, and verbal communication skills.

About Walmart Global Tech

Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail.

Flexible, hybrid work

We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence.
We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits

Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging

We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers, and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Equal Opportunity Employer

Walmart, Inc., is an Equal Opportunities Employer - By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions while being inclusive of all people.

Minimum Qualifications...

Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.

Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or related area and 5 years' experience in software engineering or related area.
Option 2: 7 years' experience in software engineering or related area. 2 years' supervisory experience.

Preferred Qualifications...

Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.

Master's degree in computer science, computer engineering, computer information systems, software engineering, or related area and 3 years' experience in software engineering or related area.

Primary Location...

BLOCK-1, PRESTIGE TECH PACIFIC PARK, SY NO. 38/1, OUTER RING ROAD KADUBEESANAHALLI, India

R-2270344
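One concept behind this posting's requirement for "near real-time ingestion pipelines using Kafka and Spark Structured Streaming" is event-time watermarking, which bounds streaming state by discarding events that arrive too late. The following is a minimal pure-Python sketch of that rule, not Spark API code; the keys, timestamps, and 10-minute delay are all invented for illustration:

```python
from datetime import datetime, timedelta

def process_with_watermark(events, delay_seconds=600):
    """Sketch of the event-time watermark rule Spark Structured Streaming
    applies: the watermark trails the maximum event time seen so far by a
    fixed delay, and events older than the watermark are dropped."""
    max_event_time = None
    accepted, dropped = [], []
    for key, event_time in events:
        if max_event_time is None or event_time > max_event_time:
            max_event_time = event_time
        watermark = max_event_time - timedelta(seconds=delay_seconds)
        if event_time >= watermark:
            accepted.append((key, event_time))
        else:
            dropped.append((key, event_time))
    return accepted, dropped

t0 = datetime(2024, 1, 1, 12, 0, 0)
events = [
    ("a", t0),
    ("b", t0 + timedelta(minutes=30)),    # advances the watermark to 12:20
    ("late", t0 - timedelta(minutes=5)),  # older than the watermark -> dropped
]
accepted, dropped = process_with_watermark(events, delay_seconds=600)
```

In Spark itself this is expressed declaratively on the streaming DataFrame, e.g. `df.withWatermark("event_time", "10 minutes")`, rather than with an explicit loop.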
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
The Databricks Data Engineering Lead role requires a highly skilled individual who will architect and lead the implementation of scalable, high-performance data pipelines and platforms using the Databricks Lakehouse ecosystem. As a Data Engineering Lead, you will be responsible for managing a team of data engineers, establishing best practices, and collaborating with cross-functional stakeholders to unlock advanced analytics, AI/ML, and real-time decision-making capabilities.

Your key responsibilities will include leading the design and development of modern data pipelines, data lakes, and lakehouse architectures using Databricks and Apache Spark. You will manage and mentor a team of data engineers, providing technical leadership and fostering a culture of excellence. Additionally, you will architect scalable ETL/ELT workflows to process structured and unstructured data from various sources (cloud, on-prem, streaming), build and maintain Delta Lake tables, and optimize performance for analytics, machine learning, and BI use cases. Collaboration with data scientists, analysts, and business teams to deliver high-quality, trusted, and timely data products is crucial. Ensuring best practices in data quality, governance, lineage, and security, including the use of Unity Catalog and access controls, will also be part of your responsibilities. Integration of Databricks with cloud platforms (AWS, Azure, or GCP) and data tools (Snowflake, Kafka, Tableau, Power BI, etc.) and implementation of CI/CD pipelines for data workflows using tools such as GitHub, Azure DevOps, or Jenkins are essential tasks. It is important to stay current with Databricks innovations and provide recommendations on platform strategy and architecture improvements.

Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
You should have 7+ years of experience in data engineering, including 3+ years working with Databricks and Apache Spark. Proven leadership experience in managing and mentoring data engineering teams is required. Proficiency in PySpark and SQL, and experience with Delta Lake, Databricks Workflows, and MLflow, are necessary skills. A strong understanding of data modeling, distributed computing, and performance tuning is essential. Familiarity with one or more major cloud platforms (Azure, AWS, GCP) and cloud-native services, experience implementing data governance and security in large-scale environments, and familiarity with real-time data processing using Structured Streaming or Kafka are also expected. Knowledge of data privacy, security frameworks, and compliance standards (e.g., PCI DSS, GDPR), plus exposure to machine learning pipelines, notebooks, and MLOps practices, are additional qualifications required. A Databricks Certified Data Engineer or equivalent certification is preferred.
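Much of the Delta Lake work this posting describes centers on `MERGE INTO` upserts. As a rough mental model only (plain Python, not the Delta API; the table contents and `id` key are invented), matched rows are updated and unmatched rows are inserted:

```python
def merge_upsert(target, updates, key="id"):
    """Pure-Python model of a Delta Lake MERGE upsert.
    Roughly equivalent Delta SQL:
      MERGE INTO target USING updates ON target.id = updates.id
      WHEN MATCHED THEN UPDATE SET *
      WHEN NOT MATCHED THEN INSERT *"""
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        if row[key] in by_key:
            by_key[row[key]].update(row)  # WHEN MATCHED THEN UPDATE
        else:
            by_key[row[key]] = dict(row)  # WHEN NOT MATCHED THEN INSERT
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "name": "old"}, {"id": 2, "name": "keep"}]
updates = [{"id": 1, "name": "new"}, {"id": 3, "name": "inserted"}]
result = merge_upsert(target, updates)
```

In production the same operation runs atomically against a Delta table, with the transaction log providing the ACID guarantees this toy dictionary version lacks.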
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You will be responsible for designing, developing, and maintaining scalable data pipelines using Azure Databricks. Your role will involve building and optimizing ETL/ELT processes for structured and unstructured data, collaborating with data scientists, analysts, and business stakeholders, integrating Databricks with Azure Data Lake, Synapse, Data Factory, and Blob Storage, developing real-time data streaming pipelines, and managing data models/data warehouses. Additionally, you will optimize performance, manage resources, ensure cost efficiency, implement best practices for data governance, security, and quality, troubleshoot and improve existing data workflows, contribute to architecture and technology strategy, mentor junior team members, and maintain documentation.

To excel in this role, you should have a Bachelor's/Master's degree in Computer Science, IT, or a related field, along with 5+ years of Data Engineering experience (minimum 2+ years with Databricks). Strong expertise in Azure cloud services (Data Lake, Synapse, Data Factory), proficiency in Spark (PySpark/Scala) and big data processing, experience with Delta Lake, Structured Streaming, and real-time pipelines, strong SQL skills, an understanding of data modeling and warehousing, familiarity with DevOps tools such as CI/CD, Git, Terraform, and Azure DevOps, and excellent problem-solving and communication skills are essential.

Preferred qualifications include Databricks certification (Associate/Professional), experience with machine learning workflows on Databricks, knowledge of data governance tools like Purview, experience with REST APIs, Kafka, and Event Hubs, and cloud performance tuning and cost optimization experience.

Join us to be a part of a supportive and collaborative team, work with a growing company in the exciting BI and Data industry, enjoy a competitive salary and performance-based bonuses, and have opportunities for professional growth and development.
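The ETL/ELT work described above typically includes a cleansing step between the raw and curated layers of the pipeline. A small pure-Python sketch of such a step, assuming a hypothetical `customer_id` business key (field names and rules are illustrative, not from the posting):

```python
def bronze_to_silver(records):
    """Sketch of a typical raw-to-curated cleansing step:
    normalize field names to lowercase, drop rows missing the
    business key, and deduplicate on that key (last record wins)."""
    cleaned = {}
    for rec in records:
        rec = {k.strip().lower(): v for k, v in rec.items()}
        cust = rec.get("customer_id")
        if cust is None or cust == "":
            continue  # a real pipeline would quarantine these rows
        rec["customer_id"] = str(cust).strip()
        cleaned[rec["customer_id"]] = rec  # dedupe: last record wins
    return list(cleaned.values())

raw = [
    {"Customer_ID": " 42 ", "city": "Pune"},
    {"customer_id": "42", "city": "Mumbai"},  # duplicate key: last wins
    {"customer_id": None, "city": "n/a"},     # missing key: dropped
]
silver = bronze_to_silver(raw)
```

On Databricks the same logic would be written with DataFrame transformations (e.g. `dropDuplicates` and `filter`) so it scales beyond a single machine; this loop only shows the intended row-level semantics.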
If you are interested in this opportunity, please send your resume to hr@exillar.com and fill out the form at https://forms.office.com/r/HdzMNTaagw.
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Big Data Architect specializing in Databricks at Codvo, a global empathy-led technology services company, your role is critical in designing sophisticated data solutions that drive business value for enterprise clients and power internal AI products. Your expertise will be instrumental in architecting scalable, high-performance data lakehouse platforms and end-to-end data pipelines, making you the go-to expert for modern data architecture in a cloud-first world.

Your key responsibilities will include designing and documenting robust, end-to-end big data solutions on cloud platforms (AWS, Azure, GCP) with a focus on the Databricks Lakehouse Platform. You will provide technical guidance and oversight to data engineering teams on best practices for data ingestion, transformation, and processing using Spark. Additionally, you will design and implement effective data models and establish data governance policies for data quality, security, and compliance within the lakehouse. Evaluating and recommending appropriate data technologies, tools, and frameworks to meet project requirements, and collaborating closely with various stakeholders to translate complex business requirements into tangible technical architecture, will also be part of your role. Leading and building Proofs of Concept (PoCs) to validate architectural approaches and new technologies in the big data and AI space will be crucial.

To excel in this role, you should have 10+ years of experience in data engineering, data warehousing, or software engineering, with at least 4+ years in a dedicated Data Architect role. Deep, hands-on expertise with Apache Spark and the Databricks platform is mandatory, including Delta Lake, Unity Catalog, and Structured Streaming.
Proven experience architecting and deploying data solutions on major cloud providers, proficiency in Python or Scala, expert-level SQL skills, a strong understanding of modern AI concepts, and in-depth knowledge of data warehousing concepts and modern Lakehouse patterns are essential.

This position is remote and based in India, with working hours from 2:30 PM to 11:30 PM. Join us at Codvo and be part of a team that values product innovation and mature software engineering, lives its core values of Respect, Fairness, Growth, Agility, and Inclusiveness each day, and offers expertise, outside-the-box thinking, and measurable results.
Posted 3 weeks ago
4.0 - 6.0 years
20 - 30 Lacs
Gurugram
Work from Office
Key Skills: Spark, Scala, Flink, Big Data, Structured Streaming, Data Architecture, Data Modeling, NoSQL, AWS, Azure, GCP, JVM tuning, Performance Optimization.

Roles & Responsibilities: Design and build robust data architectures for large-scale data processing. Develop and maintain data models and database designs. Work on stream processing engines like Spark Structured Streaming and Flink. Perform analytical processing on Big Data using Spark. Administer, configure, monitor, and tune the performance of Spark workloads and distributed JVM-based systems. Lead and support cloud deployments across AWS, Azure, or Google Cloud Platform. Manage and deploy Big Data technologies such as Business Data Lakes and NoSQL databases.

Experience Requirements: Extensive experience working with large data sets and Big Data technologies. 4-6 years of hands-on experience with the Spark/Big Data tech stack. At least 4 years of experience in Scala. At least 2 years of experience in cloud deployment (AWS, Azure, or GCP). At least 2 successfully completed product deployments involving Big Data technologies.

Education: B.Tech + M.Tech (Dual) or B.Tech.
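The stream-processing engines this posting names, Spark Structured Streaming and Flink, both offer tumbling-window aggregation, where each event is assigned to exactly one fixed-size event-time window. A pure-Python sketch of that assignment (the event names and 60-second window are made up; real engines add watermarking and incremental state on top of this):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Sketch of tumbling-window counting: bucket each (key, timestamp)
    event into the fixed-size window containing its event time."""
    counts = defaultdict(int)
    for key, epoch_seconds in events:
        # Window start = event time rounded down to the window boundary.
        window_start = (epoch_seconds // window_seconds) * window_seconds
        counts[(key, window_start)] += 1
    return dict(counts)

events = [("clicks", 0), ("clicks", 30), ("clicks", 65), ("views", 10)]
counts = tumbling_window_counts(events, window_seconds=60)
# ("clicks", 0) -> 2, ("clicks", 60) -> 1, ("views", 0) -> 1
```

In Spark this corresponds to grouping by `window(col("event_time"), "60 seconds")`; Flink expresses the same idea with `TumblingEventTimeWindows`.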
Posted 1 month ago
0.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant/Data Engineer. In this role, you will collaborate closely with cross-functional teams, including developers, business analysts, and stakeholders, to deliver high-quality software solutions that enhance operational efficiency and support strategic business objectives.

Responsibilities
- Provide technical leadership and architectural guidance on data engineering projects.
- Design and implement data pipelines, data lakes, and data warehouse solutions.
- Optimize Spark-based data workflows for performance, scalability, and cost-efficiency.
- Ensure robust data governance and security, including the implementation of Unity Catalog.
- Collaborate with data scientists, business users, and engineering teams to align solutions with business goals.
- Stay updated with evolving data engineering features, best practices, and industry trends.
- Proven expertise in data engineering, including Spark, Delta Lake, and Unity Catalog.
- Strong background in data engineering, with hands-on experience in building production-grade data pipelines and lakes.
- Proficient in Python (preferred) or Scala for data transformation and automation.
- Strong command of SQL and Spark SQL for data querying and processing.
- Experience with cloud platforms such as Azure, AWS, or GCP.
- Familiarity with DevOps/DataOps practices in data pipeline development.
- Knowledge of Profisee or other Master Data Management (MDM) tools is a plus.
- Certifications in Data Engineering or Spark.
- Experience with Delta Live Tables, Structured Streaming, or metadata-driven frameworks.
- Development of new reports and updating of existing reports as requested by customers.
- Automation of the respective reports through the creation of config files.
- Validation of the premium in the reports against the IMS application, via config files, to ensure there are no discrepancies.
- Validation of all reports that run on a monthly basis, and analysis of the respective reports if there is any discrepancy.

Qualifications we seek in you!

Minimum Qualifications
- BE/B Tech/MCA

Preferred Qualifications/Skills
- Excellent analytical, problem-solving, communication, and interpersonal skills
- Able to work effectively in a fast-paced, sometimes stressful environment, and deliver production-quality software within tight schedules
- Must be results-oriented, self-motivated, and able to thrive in a fast-paced environment
- Strong Specialty Insurance domain and IT knowledge

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
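The posting's report-automation duties revolve around config files that drive recurring reports. A hedged sketch of what such metadata-driven automation can look like, where a small config dict is turned into the SQL a scheduled job would run (the table, column, and filter names are entirely hypothetical, not from the posting):

```python
def build_report_query(config):
    """Turn a report config into a SQL string, so new reports can be
    added by writing config files rather than code."""
    cols = ", ".join(config["columns"])
    sql = f"SELECT {cols} FROM {config['table']}"
    if config.get("filters"):
        sql += " WHERE " + " AND ".join(config["filters"])
    return sql

config = {
    "table": "ims_premiums",              # hypothetical table name
    "columns": ["policy_id", "premium"],  # hypothetical columns
    "filters": ["report_month = '2024-01'"],
}
query = build_report_query(config)
```

In practice the config would live in a versioned file (JSON/YAML) per report, and the generated query would be submitted via Spark SQL; validation against the source application then becomes a second config-driven query plus a comparison step.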
Posted 1 month ago