10.0 - 15.0 years
14 - 16 Lacs
Pune
Work from Office
Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Play a vital role in project design to ensure scalability, reliability, and performance goals are met
- Design and develop new features, and maintain existing features by adding improvements and fixing defects in complex areas (using Java)
- Assist in troubleshooting complex technical problems in development and production
- Implement methodologies, processes, and tools
- Initiate projects and ideas to improve the team's results
- On-board and mentor new employees

To ensure you're set up for success, you will bring the following skillset and experience:
- You have 10+ years of experience in Java backend development
- You have experience as a backend tech lead and must have worked with Scala
- You have experience with Spring, Swagger, and REST APIs
- You have worked with Spring Boot, Docker, and Kubernetes
- You are a self-learner who's passionate about problem solving and technology
- You are a team player with good communication skills in English (verbal and written)

Whilst these are nice to have, our team can help you develop the following skills:
- Public cloud (AWS, Azure, GCP)
- Python, Node.js, C/C++
- Automation frameworks such as Robot Framework
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas
- Monitor and control all phases of the development process (analysis, design, construction, testing, and implementation) and provide user and operational support on applications to business users
- Utilize in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgements
- Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality
- Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems
- Ensure essential procedures are followed and help define operating standards and processes
- Serve as advisor or coach to new or junior analysts
- Operate with a limited level of direct supervision, exercising independence of judgement and autonomy
- Act as subject matter expert to senior stakeholders and/or other team members
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency

Qualifications:
- 8 to 20 years of relevant experience
- Primary skills: Java/Scala + Spark
- Must have experience in Hadoop, Java, Spark, Scala, and Python
- Experience in systems analysis and programming of software applications
- Experience in managing and implementing successful projects
- Working knowledge of consulting/project management techniques and methods
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education: Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
3.0 - 8.0 years
8 - 18 Lacs
Hyderabad
Remote
Dear Folks, Greetings!

This is with reference to your profile posted on Naukri.com. We have an immediate opening for the requirement below. Please find the job description and more details about the company, and let me know your interest.

About Tao Digital India: Tao Digital is a global technology company with a mission to unlock limitless digital innovation through solutions and services that help our clients reimagine their businesses through a digital lens. Our software and engineering heritage, strategic business and innovation consulting, design thinking, physical-to-digital competencies, and re-engineering skills provide real business value to our customers and help them thrive in the new digital economy. We have 3,500+ employees, and Tao Digital operates in the US, Canada, India, LATAM, and Nigeria. Join Team TAO and become part of a team of innovators and visionaries, leading the way in the future of business and making a real impact globally.

Job Description: We are looking for a highly skilled Big Data Engineer with expertise in distributed systems and advanced programming techniques. The candidate will have experience developing highly scalable Big Data pipelines, strong technical knowledge, thought leadership, and a passion for creating scalable, high-performance systems.

Must-have skills: Big Data, Scala, AWS, CI/CD

Key Responsibilities:
- Design, develop, and optimize distributed systems and Big Data solutions
- Implement and maintain pipelines using Scala, Python, AWS, and CI/CD tooling (a minimal sketch follows below)
- Build and maintain CI/CD pipelines to ensure efficient code deployment and integration
- Apply design patterns, optimization techniques, and locking principles to enhance system performance and reliability
- Scale systems and optimize performance

Required Qualifications:
- Proven experience working with distributed systems and Big Data technologies
- Proficiency in Scala, Python, Java, C++, and related tools
- In-depth understanding of design patterns and system optimization principles
- Hands-on experience with batch and streaming data pipelines
- Familiarity with CI/CD pipelines
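The pipeline responsibilities above map naturally onto Apache Spark. As an illustration only, here is a minimal batch ETL sketch in Scala of the kind this role describes; the S3 paths and column names (order_id, amount, order_date) are invented for the example and are not from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a batch ETL pipeline: ingest raw CSV from S3,
// cleanse, and publish partitioned Parquet. Paths and columns are hypothetical.
object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Ingest: raw files landed by an upstream process (assumed location)
    val raw = spark.read
      .option("header", "true")
      .csv("s3a://example-landing/orders/")

    // Transform: drop rows missing the key and normalise types
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("amount", col("amount").cast("double"))
      .withColumn("order_date", to_date(col("order_date"), "yyyy-MM-dd"))

    // Load: partitioned Parquet for downstream analytics consumers
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-curated/orders/")

    spark.stop()
  }
}
```

In a CI/CD setup of the kind the posting mentions, a job like this would typically be packaged with sbt and deployed to a cluster such as EMR by the pipeline.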
Posted 1 week ago
5.0 - 9.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Primary Skill Set: Data Engineering, Python, PySpark, Cloud (AWS/GCP), Scala.
Primary Skills: Snowflake, Cloud (AWS, GCP), Scala, Python, Spark, Big Data, and SQL.

Qualification: Bachelor's or master's degree

Job Responsibility:
- Strong development experience in Snowflake, Cloud (AWS, GCP), Scala, Python, Spark, Big Data, and SQL
- Work closely with stakeholders, including product managers and designers, to align technical solutions with business goals
- Maintain code quality through reviews and make architectural decisions that impact scalability and performance
- Perform root cause analysis for any critical defects, address technical challenges, optimize workflows, and resolve issues efficiently
- Expertise in Agile and Waterfall program/project implementation
- Manage strategic and tactical relationships with program stakeholders
- Successfully execute projects within strict deadlines while managing intense pressure
- Good understanding of the SDLC (Software Development Life Cycle)
- Identify potential technical risks and implement mitigation strategies
- Excellent verbal, written, and interpersonal communication abilities, coupled with strong problem-solving, facilitation, and analytical skills
- Cloud management: a good understanding of cloud architecture, containerization, and application management on AWS and Kubernetes
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.

Roles and Responsibility
The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities:
- Lead the design and implementation of Databricks-based data solutions
- Architect and optimize data pipelines for batch and streaming data
- Provide technical leadership and mentorship to a team of data engineers
- Collaborate with stakeholders to define project requirements and deliverables
- Ensure best practices in data security, governance, and compliance
- Troubleshoot and resolve complex technical issues in Databricks environments
- Stay updated on the latest Databricks features and industry trends

Key Technical Skills & Responsibilities:
- Experience in data engineering using Databricks or Apache Spark-based platforms
- Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion
- Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation
- Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing
- Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations (a minimal sketch follows at the end of this listing)
- Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation
- Design and implement Azure Key Vault and scoped credentials
- Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning
- Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups
- Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus
- Ability to define best practices, support multiple projects, and mentor junior engineers is a plus
- Must have experience working with streaming data sources and Kafka (preferred)

Eligibility Criteria:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- Extensive experience with Databricks, Delta Lake, PySpark, and SQL
- Databricks certification (e.g., Certified Data Engineer Professional)
- Experience with machine learning and AI integration in Databricks
- Strong understanding of cloud platforms (AWS, Azure, or GCP)
- Proven leadership experience in managing technical teams
- Excellent problem-solving and communication skills

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment
- Wellbeing programs and work-life balance, with integration and passion-sharing events
- Attractive salary and company initiative benefits
- Courses and conferences
- Hybrid work culture
Let's grow together.
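For readers unfamiliar with the medallion architecture named in the skills list above, here is a minimal, hypothetical sketch of a bronze-to-silver promotion on Delta Lake in Scala. It assumes a Databricks-style runtime with Delta enabled; the paths and column names (event_id, event_ts, event_type) are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Assumes a runtime with the Delta Lake libraries available (e.g. Databricks).
val spark = SparkSession.builder().appName("medallion-sketch").getOrCreate()

// Bronze layer: raw events appended as-is from the source system
val bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

// Silver layer: deduplicated, typed, and filtered view of bronze
val silver = bronze
  .dropDuplicates("event_id")
  .withColumn("event_ts", to_timestamp(col("event_ts")))
  .filter(col("event_type").isNotNull)

silver.write
  .format("delta")
  .mode("overwrite")
  .save("/mnt/lake/silver/events")
```

A gold layer would then aggregate silver into business-level tables; Delta Live Tables can express the same flow declaratively.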
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Responsibilities
Demonstrate a deep knowledge of, and ability to operationalize, leading data technologies and best practices. Partner end-to-end with Product Managers and Data Scientists to understand customer requirements, design prototypes, and bring ideas to production. We develop real products; you need to be an expert in design, coding, and scripting. Facilitate problem diagnosis and resolution in technical and functional areas. Encourage change, especially in support of data engineering best practices and developer satisfaction. Write high-quality code that is consistent with our standards, creating new standards as necessary. Demonstrate correctness with pragmatic automated tests. Review the work of other engineers in a collegial fashion to promote and improve quality and engineering practices. Develop strong working relationships with others across levels and functions. Participate in, and potentially coordinate, Communities-of-Practice in those technologies in which you have an interest. Participate in continuing education programs to grow your skills both technically and in the Williams-Sonoma business domain. Serve as a member of an agile engineering team and participate in the team's workflow.

Criteria
- 5 years of experience as a professional software engineer
- 3-5 years of experience with big data technologies
- Experience building distributed, scalable, and reliable data pipelines that ingest and process data at scale, in batch and in real time
- Strong knowledge of programming languages/tools including Spark, SQL, Python, Java, Scala, Hive, and Elasticsearch
- Experience with streaming technologies such as Spark Streaming, Flink, or Apache Beam (a minimal streaming sketch follows at the end of this listing)
- Experience with messaging systems such as Kafka
- Experience implementing the Lambda Architecture
- Working experience with SQL and NoSQL databases such as Snowflake, Cassandra, HBase, MongoDB, and/or Couchbase
- Working experience with time-series databases such as OpenTSDB and Apache Druid
- Familiarity with ML and deep learning
- Working knowledge of columnar storage formats such as Parquet, Kudu, and ORC
- An understanding of software development best practices
- Enthusiasm for constant improvement as a Data Engineer
- Ability to review and critique code and proposed designs, and offer thoughtful feedback in a collegial fashion
- Skilled in writing and presenting: able to craft needed messages so they are clearly expressed and easily understood
- Ability to work independently on problems of varying complexity and scope
- Bachelor's degree in Computer Science, Engineering, or equivalent work experience

About Us
Founded in 1956, Williams-Sonoma Inc. is the premier specialty retailer of high-quality products for the kitchen and home in the United States. Today, Williams-Sonoma, Inc. is one of the United States' largest e-commerce retailers with some of the best known and most beloved brands in home furnishings. Our family of brands includes Williams-Sonoma, Pottery Barn, Pottery Barn Kids, Pottery Barn Teen, West Elm, Williams-Sonoma Home, Rejuvenation, GreenRow, and Mark and Graham. We operate retail stores globally, and our products are also available to customers through our catalogs and online worldwide. Williams-Sonoma has established a technology center in Pune, India to enhance its global operations. The India Technology Center serves as a critical hub for innovation and focuses on developing cutting-edge solutions in areas such as e-commerce, supply chain optimization, and customer experience management.
By integrating advanced technologies like artificial intelligence, data analytics, and machine learning, the India Technology Center plays a crucial role in accelerating Williams-Sonoma's growth and maintaining its competitive edge in the global market.
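To make the streaming criteria above concrete, here is a minimal Spark Structured Streaming sketch in Scala that consumes a Kafka topic and maintains a windowed count, the sort of speed-layer aggregate used in a Lambda Architecture. The broker address and topic name are placeholders, and the job assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Requires the spark-sql-kafka-0-10 connector on the classpath.
val spark = SparkSession.builder().appName("pageview-stream").getOrCreate()

// The Kafka source exposes key/value plus a `timestamp` column per record
val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
  .option("subscribe", "page-views")                // placeholder topic
  .load()

// Speed-layer aggregate: event counts per five-minute window
val counts = events
  .groupBy(window(col("timestamp"), "5 minutes"))
  .count()

counts.writeStream
  .outputMode("update")
  .format("console") // a real job would write to a serving store instead
  .start()
  .awaitTermination()
```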
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
Data, Analytics & Insights Technology (DAIT) provides customer, client, and operational data in support of Consumer, Business, Wealth, and Payments Technology, with responsibility for a number of key data technologies. These include 16 Authorized Data Sources (ADS), marketing and insights platforms, advanced analytics platforms, core client data, and more. DAIT drives these capabilities with the goal of maximizing data assets to serve bank operations, meet regulatory requirements, and personalize interactions with our customers across all channels. GBDART, a sub-function of DAIT, is the Bank's strategic initiative to modernize data architecture and enable cloud-based, connected data experiences for analytics and insights across commercial banking.

Job Description
The candidate must have strong Java/J2EE experience with microservices development in cloud and on-prem environments, experience working on large-scale enterprise applications using Java, web services, and streaming/real-time processing, and experience working on large and complex systems in all phases of the SDLC. Good experience creating and deploying services using Scala and Java with CI/CD implementation. Experience with graph databases and data modeling using RDF is good to have.

Responsibilities
- Understand the business requirements and perform gap analysis
- Strong experience with microservices using Java and containers
- Provide technical solutions and develop SOAP/REST web services using Java and Spring
- Debug code and manage logs effectively
- Prepare unit test cases and run them through JUnit
- Deploy and manage code through CI/CD pipelines
- Refactor existing code to enhance readability, performance, and general structure
- Strong SQL database experience; write complex SQL queries
- Assist the testing team where necessary to aid in testing and test case creation
- Provide guidance to team developers with design, implementation, and completion
- Follow the agile methodology
- Work with the onsite team to determine needs and apply/customize existing technology to meet those requirements
- Maintain existing software systems by identifying and correcting software defects
- Maintain and support multiple projects and deadlines
- Document and report application specifics; create technical specifications and test plans
- Provide weekend on-call support during application releases

Requirements

Education: Certifications, if any: NA

Experience Range: 8 to 15 years

Foundational Skills:
- 8-15 years of Java development experience
- Must have experience driving projects technically
- Strong experience with Java, Spring, and web services (SOAP/REST)
- Experience with graph databases and RDF is good to have
- Experience with application servers such as WebLogic; JMS, EJB
- JDBC/SQL programming
- SBT, Autosys, Bitbucket, Jenkins
- Must be detail-oriented and a quick learner
- Strong communication skills, both verbal and written
- Able to work independently as well as with teams in a proactive manner

Desired Skills:
- Adaptability to quickly learn and deliver on internal frameworks
- Ability to work on multiple projects and flexibility to adapt to changing requirements
- Willingness to embrace and learn new technologies
- Must be an effective communicator

Work Timings: General shift (11:00 AM to 8:00 PM)

Job Location: Chennai, GIFT
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Data Engineer (AWS + Python, Spark, Kafka for ETL)!

Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka
- Integrate structured and unstructured data from various data sources into data lakes and data warehouses
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift)
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost
- Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark with appropriate cloud-based services such as Amazon AWS
- Build data pipelines by building ETL (Extract-Transform-Load) processes
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs
- Analyse requirements/user stories in business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way
- Coordinate with release management and other supporting teams to deploy changes to the production environment

Qualifications we seek in you!

Minimum Qualifications:
- Experience designing and implementing data pipelines, building data applications, and performing data migration on AWS
- Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift
- Experience with Databricks is an added advantage
- Strong experience in Python and SQL
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift
- Advanced programming skills in Python for data processing and automation
- Hands-on experience with Apache Spark for large-scale data processing
- Experience with Apache Kafka for real-time data streaming and event processing
- Proficiency in SQL for data querying and transformation
- Strong understanding of security principles and best practices for cloud-based environments
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance
- Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment
- Strong communication and collaboration skills to work effectively with cross-functional teams

Preferred Qualifications/Skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering
- AWS Data Engineering and Cloud certifications; Databricks certifications
- Experience with multiple data integration technologies and cloud platforms
- Knowledge of Change & Incident Management processes

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 week ago
3.0 years
1 - 6 Lacs
Hyderābād
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, more openness in a cloud-enabled world.

The Business & Industry Copilots group is a rapidly growing organization responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solutions, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered.

This is an exciting time to join our group, BIC Customer Experience, and work on something highly strategic to Microsoft. The goal of Customer Zero Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across the Marketing, Sales, Services, and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value of Azure and the Power Platform, we ensure our solutions are robust and efficient. Our organization's implementation acts as reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us!

We are looking for talented and motivated data engineers interested in helping our organization empower learners by producing valuable data that can be used to understand the organization's needs and make the right decisions. We want you for your passion for technology, your curiosity and willingness to learn, your ability to communicate well in a team environment, your desire to make our team better with your contributions, and your ability to deliver. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, data warehousing, and/or Business Intelligence development.

Responsibilities
- Implement scalable data models, data pipelines, data storage, management, and transformation solutions for real-time decisioning, reporting, data collecting, and related functions
- Leverage knowledge of machine learning (ML) models and implement appropriate solutions for business objectives
- Ship high-quality, well-tested, secure, and maintainable code
- Develop and maintain software designed to improve data governance and security
- Troubleshoot and resolve issues related to data processing and storage
- Collaborate effectively with teammates, other teams, and disciplines, and drive improvements in engineering
- Create and implement code for a product, service, or feature, reusing code as applicable
- Contribute to efforts to break down larger work items into smaller work items and provide estimation
- Troubleshoot live-site issues as part of both product development and Designated Responsible Individual (DRI) duties during live-site rotations
- Remain current in skills by investing time and effort into staying abreast of the latest technologies

Qualifications
Required:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
- 3+ years of experience in business analytics, software development, data modeling, or data engineering work
- Software development using languages like C#, JavaScript, or Java
- Experience using a variety of data stores, including data warehouses, RDBMS, in-memory caches, and document databases
- Proficiency with SQL and NoSQL, and hands-on experience using distributed computing platforms
- Experience developing on cloud platforms (e.g. Azure, AWS) in a continuous delivery environment
- Strong problem-solving, design, implementation, and communication skills
- Strong intellectual curiosity and passion for learning new technologies

Preferred Qualifications:
- Experience with data engineering projects with a firm sense of accountability and ownership
- Experience in ETL/ELT, data warehousing, data pipelines, and/or Business Intelligence development
- Experience using ML, anomaly detection, predictive analysis, and exploratory data analysis
- A strong understanding of the value of data, data exploration, and the benefits of a data-driven organizational culture
- Business Intelligence experience or visualization with tools such as Power BI is also beneficial
- Experience implementing data systems in C#/Python/Scala or similar
- Working knowledge of any (or multiple) of the following tech stacks is a plus: SQL, Databricks, PySpark SQL, Azure Synapse, Azure Data Factory, Azure Fabric, or similar
- Basic knowledge of the Microsoft Dynamics Platform is an added advantage

#BICJobs

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 week ago
5.0 years
4 - 6 Lacs
Gurgaon
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS native data services are mandatory skills for this role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
5.0 years
4 - 6 Lacs
Gurgaon
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS native data services are mandatory skills for this role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
5.0 years
7 - 9 Lacs
Gurgaon
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS native data services are mandatory skills for this role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS native data services are mandatory skills for this role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.

What PepsiCo Data Management and Operations does:
- Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company
- Handle day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset
- Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, or other stakeholders
- Increase awareness about available data and democratize access to it across the company

As a data engineer, you will be the key technical expert building PepsiCo's data products to drive a strong vision. You'll be empowered to create data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help develop very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
- Act as a subject matter expert across different digital projects
- Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance
- Implement best practices around systems integration, security, performance, and data management
- Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners
- Develop and optimize procedures to productionalize data science models
- Define and manage SLAs for data products and processes running in production
- Support large-scale experimentation done by data scientists
- Prototype new approaches and build solutions at scale
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create and audit reusable packages or libraries

Qualifications
- 4+ years of overall technology experience, including at least 3+ years of hands-on software development, data engineering, and systems architecture
- 3+ years of experience with Data Lake infrastructure, data warehousing, and data analytics tools
- 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, and Scala
- 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services (Azure certification is a plus)
- Experience with Azure Log Analytics
- Experience integrating multi-cloud services with on-premises technologies
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations (a minimal Deequ sketch follows below)
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools
- Experience with statistical/ML techniques is a plus
- Experience building solutions in the retail or supply chain space is a plus
- Experience with version control systems like GitHub and deployment/CI tools
- Working knowledge of agile development, including DevOps and DataOps concepts
- B.Tech/BA/BS in Computer Science, Math, Physics, or other technical fields

Skills, Abilities, Knowledge:
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management
- Strong change manager, comfortable with change, especially that which arises through company growth
- Ability to understand and translate business requirements into data and technical requirements
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously
- Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment
- Strong organizational and interpersonal skills; comfortable managing trade-offs
- Foster a team culture of accountability, communication, and self-management
- Proactively drive impact and engagement while bringing others along
- Consistently attain/exceed individual and team goals
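Deequ, one of the data quality tools named in the qualifications above, is itself a Scala library on Spark. Below is a minimal verification sketch; the dataset path and column names are hypothetical, but the VerificationSuite/Check API shown is Deequ's own.

```scala
import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("dq-sketch").getOrCreate()

// Hypothetical curated dataset to validate before publishing
val orders = spark.read.parquet("s3a://example-curated/orders/")

val result = VerificationSuite()
  .onData(orders)
  .addCheck(
    Check(CheckLevel.Error, "orders integrity")
      .isComplete("order_id")    // key column has no nulls
      .isUnique("order_id")      // key column has no duplicates
      .isNonNegative("amount"))  // amounts are never negative
  .run()

// Gate the pipeline run on the verification outcome
if (result.status != CheckStatus.Success)
  sys.error("Data quality checks failed; blocking the pipeline run")
```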
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
You bring 5 years of experience in Python, PySpark, and SQL, along with hands-on experience with AWS services including Glue, EMR, Lambda, S3, EC2, and Redshift. You will work out of the Virtusa office, collaborating with a team of experts. Scala, Kafka, PySpark, and AWS native data services are mandatory skills for this role; knowledge of Big Data is a nice-to-have that will set you apart from other candidates.
Posted 1 week ago
0 years
0 Lacs
Greater Nashik Area
On-site
Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Title: Data Scientist – GBS Commercial
Location: Bangalore
Reporting to: Senior Manager – GBS Commercial

Purpose of the role
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math, and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.

Key tasks & accountabilities
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling (an illustrative sketch follows below)
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams

Qualifications, Experience, Skills
- Level of educational attainment required: BSc/BA in Computer Science, Engineering or a relevant field; a graduate degree in Data Science or another quantitative field is preferred
- Previous work experience required: proven experience as a Data Scientist or Data Analyst; experience in data mining; understanding of machine learning and operations research
- Technical skills required: knowledge of R, SQL, and Python; familiarity with Scala, Java, or C++ is an asset; experience using business intelligence tools (e.g. Power BI) and data frameworks (e.g. Hadoop)
- Analytical mind and business acumen
- Strong math skills (e.g. statistics, algebra)

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
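As a purely illustrative companion to the ensemble modeling accountability above, here is a short Spark MLlib sketch in Scala that trains a random forest, itself an ensemble of decision trees, on a hypothetical customer dataset. The feature columns, label, and data path are invented for the example.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("ensemble-sketch").getOrCreate()

// Hypothetical dataset with numeric features and a binary `label` column
val df = spark.read.parquet("/data/customers")

// Assemble raw columns into the single feature vector MLlib expects
val assembler = new VectorAssembler()
  .setInputCols(Array("recency", "frequency", "monetary"))
  .setOutputCol("features")

// A random forest is an ensemble: many trees vote on each prediction
val rf = new RandomForestClassifier().setNumTrees(100)

val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42)
val model = new Pipeline().setStages(Array(assembler, rf)).fit(train)

// Evaluate with area under the ROC curve on held-out data
val auc = new BinaryClassificationEvaluator().evaluate(model.transform(test))
println(s"Test AUC = $auc")
```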
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, along with programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
7.0 - 12.0 years
15 - 30 Lacs
Hyderabad
Work from Office
We are hiring a Senior Kafka Developer; immediate joiners or candidates with up to 15 days' notice are preferred.

Job Title: Senior Kafka Developer
Location: Hyderabad

Roles and Responsibilities:
- Develop Kafka applications using the Java client APIs (a minimal producer sketch follows below)
- Strong programming skills in Java and Scala
- Knowledge of Kafka architecture, concepts, and best practices
- Conversant with concepts like data ingestion, transformation, streaming, and distribution
- Solid understanding of RESTful APIs and web services
- Knowledge of building and maintaining microservices with Kafka as a messaging platform
- Proficiency in working with NoSQL databases, particularly Cassandra
- Experience with cloud platforms like AWS
- Experience with managed services like Confluent Kafka and Amazon MSK, as well as working directly with Apache Kafka

Qualifications and Education Requirements: B.Tech, M.Tech, MCA
Experience: 7+ years
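Since the role centres on the Java client APIs used from Java and Scala, here is a minimal producer sketch in Scala against those APIs. The broker address, topic, and payload are placeholders; a production service would also handle the Future returned by send and configure serialization for its real message types.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object PaymentsProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // placeholder broker
    props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Fire-and-forget send to a placeholder topic; real code would
      // inspect the returned Future or register a completion callback
      producer.send(new ProducerRecord("payments", "key-1", """{"amount":42.0}"""))
      producer.flush()
    } finally {
      producer.close()
    }
  }
}
```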
Posted 1 week ago
4.0 - 9.0 years
11 - 17 Lacs
Bengaluru
Work from Office
Greetings from TSIT Digital!

This is with regard to an excellent opportunity with us. If you have that unique and unlimited passion for building world-class enterprise software products that turn into actionable intelligence, then we have the right opportunity for you and your career. This is an opportunity for permanent employment with TSIT Digital.

What we are looking for: Data Engineer
Experience: 4+ years (relevant experience: 2-5 years)
Location: Bangalore
Notice period: Immediate to 15 days

Job Description:
Work location: Manyata Tech Park, Bengaluru, Karnataka, India
Work mode: Hybrid
Client: Lowes

Mandatory Skills:
- Data Engineer: Scala/Python, SQL, scripting
- Knowledge of BigQuery, PySpark, Airflow, serverless cloud-native services, and Kafka streaming

If you are interested, please share your updated CV: kousalya.v@tsit.co.in
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Scala, PySpark
Minimum 3 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to guarantee the quality of the applications you create, while continuously seeking ways to enhance functionality and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME
- Active participation and contribution in team discussions is required
- Contribute to providing solutions to work-related problems
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications
- Conduct thorough testing and debugging of applications to ensure optimal performance and reliability

Professional & Technical Skills:
- Must-have skills: Proficiency in the Databricks Unified Data Analytics Platform
- Good-to-have skills: Experience with PySpark and Scala
- Strong understanding of data integration and ETL processes
- Familiarity with cloud computing concepts and services
- Experience in application lifecycle management and agile methodologies

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform
- This position is based at our Chennai office
- 15 years of full-time education is required
Posted 1 week ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

RESPONSIBILITIES

Strategy
Provide a proactive, agile, and adaptive Transaction Monitoring solution that ensures the highest level of compliance in financial crime mitigation. Empower our teams to detect and prevent financial crime and to manage risk proactively using advanced data and analytics, safeguarding our clients and the bank.

Business
- Drive initiatives as a Product Owner to design, develop, and deploy solutions for Transaction Monitoring covering CASA, Trade Fraud and Trade AML, FM third-party payments, FM AML, etc., and act as a point of contact for assigned AML Transaction Monitoring initiatives
- Lead, mentor, and manage the GBS teams, ensuring alignment with FCSO TM goals and values
- Strategically allocate resources to meet operational and project needs, ensuring technical skills are aligned with the deliverables
- Support analytical processes to enhance TM red-flag monitoring, especially detection scenarios, and optimise cases for investigation through AI/ML models and analytical processes
- Improve processes such as threshold tuning, reconciliation, segmentation, and optimisation associated with the Transaction Monitoring function across products such as CASA, Trade, Credit Cards, Securities, and Financial Markets (an illustrative threshold-rule sketch appears at the end of this listing)
- Support the design of different scenarios for ML detection, model development, and validation, including data quality validation, model effectiveness, and rules logic for effective risk analytics
- Design dashboards and presentations for senior management and carry out program-management activity within the Transaction Monitoring Solutions team

Processes
The role requires a strategic thinker and technical expert with a strong background in financial crime detection and prevention, specifically using advanced analytical methodologies, along with hands-on expertise to design, develop, and deploy analytics and models that detect suspicious activities and financial crime. The ideal candidate will possess leadership and technical expertise, a strategic mindset for enhancing Transaction Monitoring effectiveness, and good familiarity with compliance regulations in the financial sector.
- The role holder is accountable for ensuring a strong connection between the teams and key stakeholders, communicating both technical and operational updates
- Knowledge of core banking, payment, CDD, securities, and other systems, and the interplay/linkages between them
- Understand business domain aspects relevant to AML Monitoring (MANTAS, Quantexa) and Case Management (ECM)
- Able to conceptualise, design, support, and align relevant processes and controls to industry best practice, and close out any compliance gaps
- Mentor and conduct training programs to bring new joiners and the team up to speed on new business requirements
- Provide endorsement for changes or remediation activities impacting TM Solutions, and engage with relevant stakeholders to deploy the changes to production
- Mitigate risks by ensuring robust system configuration, process, and monitoring standards
- Work towards the collective objectives and scorecard of the business function, published from time to time in the form of job and performance objectives for a defined period

People and Talent
- Provide coaching to peers and new hires to ensure they are highly engaged and performing to their potential
- Promote and embed a culture of openness, trust, and risk awareness, where ethical, legal, regulatory, and policy-compliant conduct is the norm
- Stimulate an environment where forward planning, prioritisation, deadline management, streamlined workflows, and collaborative, inclusive yet effective and efficient work practices are the norm
- Foster a collaborative and inclusive team culture that emphasises innovation, accountability, and technical excellence

Risk Management
- Apply Group and FCC policies and processes (AML monitoring) to manage risks
- Apply risk and data-analytic tools/techniques to optimise and tune relevant detection scenarios, and screening and monitoring optimisation solutions
- Provide typology review coverage based on relevant segments/products and validate appropriate monitoring coverage which is fit for purpose
- Liaise with Business/Segment stakeholders to understand emanating risks and ensure those are suitably addressed through the monitoring coverage
- Engage Advisory teams on Product Risk Assessment reviews, outlining transaction monitoring coverage against products and the rationale for deviations
- Ensure appropriate and valid agreements are in place for consumption of product and segment data for Transaction Monitoring
- Make recommendations (and support implementation where required) to relevant stakeholders on possible mitigants for identified risks or areas of concern from TM Solutions
- Provide Transaction Monitoring subject matter expertise on the outcome of AML Risk Identification and Assessment Methodologies
- Extend support in the implementation of control improvements, enhancements, or simplifications proposed by relevant CFCC Advisory functions
- Provide guidance in understanding technical and AML-detection-related aspects of Transaction Monitoring systems pertinent to a country
- Collaborate with FCC Advisory teams on determining risk tolerances
- Strong interpersonal skills to collaborate effectively with cross-functional teams

Governance
- Attend relevant business/segment/product-related working group meetings
- Ensure tracking and remediation of surveillance- and investigations-related regulatory findings
- Report product and segment matters impacting monitoring in relevant FCC and upstream governance committees
- Provide regular progress updates on agreed mitigation actions concerning TM design and product issues and enhancements

Regulatory & Business Conduct
- Display exemplary conduct and live by the Group's Values and Code of Conduct
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines, and the Group Code of Conduct
- As a people leader, contribute to the FCSO TM Solutions team achieving the outcomes set out in the Bank's Conduct Principles
- Effectively and collaboratively identify, escalate, mitigate, and resolve risk, conduct, and compliance matters

Key Stakeholders
- Relevant Business teams (CIB & WRB)
- Business and segment CFCC Advisory teams
- Group AML RFO/teams
- ITO
- Country/Regional FCC teams
- Audit/Assurance teams
- Group Model Validation/Group Model Risk Team

Other Responsibilities
- Embed Here for good and the Group's brand and values in the team
- Perform other responsibilities assigned under Group, Country, Business, or Functional policies and procedures

Qualifications

Education
Post Graduate degree in Management/Statistics/Mathematics OR Graduate degree in Engineering from a reputed institution.
Training
15+ years of hands-on experience in Transaction Monitoring, with at least 5 years focused on financial crime threat mitigation tools and platforms.
Exceptional analytical, problem-solving and decision-making abilities with a focus on technical solutions.
Proficiency in agile methodologies, technical roadmaps and DevOps practices.
Experience as a Product Owner managing Transaction Monitoring tools to mitigate financial crime risk is essential.
Project management and presentation skills are essential.

Certifications
Certification from ACAMS (Association of Certified Anti-Money Laundering Specialists) or equivalent is preferred.
A project management certification is an added advantage.

Languages
English

Role Specific Technical Competencies
Data Science
Compliance Advisory
Manage Risk
Surveillance (specifically Monitoring)
Statistical Modelling / Machine Learning / Data Analysis
SQL / HQL / Hive / Hadoop scripting and databases such as Oracle and HaaS
R / Python / SAS / STATA / C++ / Scala programming
Strong coding skills in Python, R and SQL, and familiarity with data engineering practices for model integration
Familiarity with databases such as Oracle and SQL Server
Expertise in creating dashboards and reports using tools such as Power BI and Tableau
Experience integrating TM systems with core banking platforms and data warehouses, with familiarity with cloud platforms for scalable TM solutions
(A minimal threshold-tuning sketch, for illustration only, appears at the end of this listing.)

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.
Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
Flexible working options based around home and office locations, with flexible working patterns. 
Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits.
A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
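For illustration only (this is not part of the Standard Chartered role description): a minimal Python/pandas sketch of the kind of threshold-tuning analysis the technical competencies above refer to, assuming a hypothetical extract of historical alerts with investigator dispositions. The table, column names and disposition values are all invented for the example.

import pandas as pd

# Hypothetical extract: one row per historical alert for a single detection scenario.
alerts = pd.DataFrame({
    "txn_amount": [12_000, 55_000, 8_500, 150_000, 23_000, 9_900, 74_000, 31_000],
    "disposition": ["closed", "sar", "closed", "sar", "closed", "closed", "sar", "closed"],
})

def tune_threshold(df: pd.DataFrame, candidates: list) -> pd.DataFrame:
    """For each candidate amount threshold, report alert volume and the share of
    alerts that historically converted to a SAR (a crude effectiveness proxy)."""
    rows = []
    for t in candidates:
        above = df[df["txn_amount"] >= t]
        rows.append({
            "threshold": t,
            "alert_volume": len(above),
            "sar_rate": (above["disposition"] == "sar").mean() if len(above) else 0.0,
        })
    return pd.DataFrame(rows)

print(tune_threshold(alerts, candidates=[10_000, 25_000, 50_000]))

In practice this trade-off curve (volume versus conversion rate) is what tuning exercises put in front of governance forums before a threshold change is endorsed for production.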
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Data Engineer - Azure Databricks, PySpark, Python, Airflow - Chennai/Pune, India (6-10 years exp only)

YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow.
As a Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub.

Responsibilities
Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow and SQL as the core development technologies.
Deploy application components using CI/CD pipelines.
Build utilities for monitoring and automating repetitive functions.
Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure and Tech Ops.
Collaborate with the Data Science team to productionize ML models.
Participate in a rotational support schedule to respond to customer queries and deploy bug fixes in a timely and accurate manner.

Qualifications
6-10 years of applicable software engineering experience.
Strong fundamentals with experience in big data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL.
Must have experience in cloud technologies, preferably Microsoft Azure.
Must have experience in performance optimization of Spark workloads (an illustrative sketch follows this listing).
Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins and Docker.
Good to have knowledge of Snowflake.
Good to have knowledge of relational databases, preferably PostgreSQL.
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business.
Minimum B.S. degree in Computer Science, Computer Engineering or a related field.

Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms.
Recharge and revitalize with the help of wellness plans made for you and your family.
Plan your future with financial wellness tools.
Stay relevant and upskill yourself with career development opportunities.

Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. 
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
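For illustration only (not NIQ code): a minimal PySpark sketch of a few common Spark workload optimisations of the kind the qualifications above ask about - partition pruning, a broadcast join, and adaptive query execution. All paths, tables and column names are invented for the example.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("perf-sketch")
    # Let AQE coalesce shuffle partitions and mitigate skewed joins at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

# Read only the partitions needed (partition pruning on a date-partitioned layout).
sales = spark.read.parquet("/data/sales").where(F.col("event_date") == "2024-01-01")

# Broadcast the small dimension table to avoid a shuffle join.
stores = spark.read.parquet("/data/stores")
enriched = sales.join(F.broadcast(stores), on="store_id", how="left")

# Project needed columns early, then aggregate and write out.
result = (
    enriched.select("store_id", "amount")
    .groupBy("store_id")
    .agg(F.sum("amount").alias("total"))
)
result.write.mode("overwrite").parquet("/data/out/sales_by_store")

The broadcast hint is only appropriate when the dimension table comfortably fits in executor memory; otherwise AQE's runtime statistics are usually the safer lever.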
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Data Engineer - Azure Databricks, PySpark, Python, Airflow - Chennai/Pune, India (3-6 years exp only)

YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow.
As a Junior Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub.

Responsibilities
Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow and SQL as the core development technologies.
Deploy application components using CI/CD pipelines.
Build utilities for monitoring and automating repetitive functions.
Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure and Tech Ops.
Collaborate with the Data Science team to productionize ML models.
Participate in a rotational support schedule to respond to customer queries and deploy bug fixes in a timely and accurate manner.

Qualifications
3-6 years of applicable software engineering experience.
Strong fundamentals with experience in big data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, SQL (a minimal Airflow sketch follows this listing).
Must have experience in cloud technologies, preferably Microsoft Azure.
Must have experience in performance optimization of Spark workloads.
Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins and Docker.
Good to have knowledge of Snowflake.
Good to have knowledge of relational databases, preferably PostgreSQL.
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business.
Minimum B.S. degree in Computer Science, Computer Engineering or a related field.

Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms.
Recharge and revitalize with the help of wellness plans made for you and your family.
Plan your future with financial wellness tools.
Stay relevant and upskill yourself with career development opportunities.

Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. 
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
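For illustration only (not NIQ code): a minimal Airflow 2.x DAG sketch of the shape the stack above implies, chaining a placeholder extract step into a placeholder transform step. The DAG id, task ids and callables are all invented; in a real pipeline the transform would typically hand off to a Spark/Databricks job operator rather than a Python callable.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull source data")  # placeholder for a real extract step

def transform(**_):
    print("run spark job")  # placeholder for a Spark/Databricks submission

with DAG(
    dag_id="connect_daily_load",   # invented name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract must finish before transform runs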
Posted 1 week ago
9.0 - 13.0 years
0 Lacs
Pune, Maharashtra
On-site
Sonatus is a well-funded, fast-paced, and rapidly growing company dedicated to helping automakers develop dynamic software-defined vehicles through innovative software products and solutions. With an impressive track record of over four million vehicles already equipped with our cutting-edge technology, we are at the forefront of automotive digital transformation. Our team at Sonatus comprises a diverse group of talented technology and automotive specialists from leading companies in the industry.

As a Staff Cloud Backend Engineer at Sonatus, you will play a crucial role within our cloud engineering team. Your responsibilities will include leading the design, development, and deployment of scalable backend services and APIs for cloud-based applications. You will architect and implement microservices and serverless solutions using languages like Go, ensuring high performance, scalability, and resilience of backend services. Additionally, you will be responsible for optimizing database systems, scaling backend infrastructure to handle increased loads, and implementing security best practices.

To excel in this role, you should possess a Bachelor's or Master's degree in Computer Science or a related field, along with at least 9 years of experience in backend development. Your expertise should extend to cloud platforms such as AWS, Azure, or Google Cloud, as well as proficiency in backend programming languages and microservices architecture. Strong knowledge of cloud infrastructure, containerization, orchestration, RESTful and GraphQL APIs, and database systems is essential. Experience with distributed systems and the leadership ability to mentor engineering teams are also key requirements.

At Sonatus, we value innovation and are looking for team members who are passionate about driving change. If you are ready to advance your career in a dynamic and forward-thinking environment, we encourage you to apply and be a part of our innovative team.

Please note that Sonatus, Inc. does not accept unsolicited agency resumes and is not responsible for any fees associated with unsolicited activities.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 01-Aug-2025

About the role
Refer to responsibilities

What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for
Job Summary: Build solutions for real-world problems in workforce management for retail. You will work with a team of highly skilled developers and product managers throughout the entire software development life cycle of the products we own. In this role you will be responsible for designing, building, and maintaining our big data pipelines. Your primary focus will be on developing data pipelines using available technologies.

In this job, I’m accountable for:
Following our Business Code of Conduct and always acting with integrity and due diligence, with these specific risk responsibilities:
Represent Talent Acquisition in all forums/seminars pertaining to process, compliance and audit.
Perform other miscellaneous duties as required by management.
Driving CI culture, implementing CI projects and innovation within the team.
Design and implement scalable and reliable data processing pipelines using Spark/Scala/Python and the Hadoop ecosystem.
Develop and maintain ETL processes to load data into our big data platform (an illustrative sketch follows this listing).
Optimize Spark jobs and queries to improve performance and reduce processing time.
Working with product teams to communicate and translate needs into technical requirements.
Design and develop monitoring tools and processes to ensure data quality and availability. 
Collaborate with other teams to integrate data processing pipelines into larger systems.
Deliver high-quality code and solutions, bringing solutions into production.
Perform code reviews to optimise the technical performance of data pipelines.
Continually look for how we can evolve and improve our technology, processes, and practices.
Lead group discussions on system design and architecture.
Manage and coach individuals, providing regular feedback and career development support aligned with business goals.
Allocate and oversee team workload effectively, ensuring timely and high-quality outputs.
Define and streamline team workflows, ensuring consistent adherence to SLAs and data governance practices.
Monitor and report key performance indicators (KPIs) to drive continuous improvement in delivery efficiency and system uptime.
Oversee resource allocation and prioritization, aligning team capacity with project and business demands.

Key people and teams I work with in and outside of Tesco:
TBS & Tesco Senior Management
TBS Reporting Team
Tesco UK / ROI / Central Europe
Business stakeholders

People, budgets and other resources I am accountable for in my job:
Any other accountabilities by the business

Experience relevant for this job:
7+ years of experience in building and maintaining big data query platforms using Spark/Scala.

Operational skills relevant for this job:
Skills: ETL, YARN, Spark, Hive, Hadoop, PySpark/Python (any one); Linux/Unix/Shell environments (any one); query optimisation.
Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, Streaming etc.
Experience with ETL processes and data modelling.
Problem-solving and troubleshooting skills.
Working knowledge of Oozie/Airflow.
Experience in writing unit test cases and shell scripting.
Ability to work independently and as part of a team in a fast-paced environment.
Good to have: Kafka, REST API/reporting tools.

You will need
Refer to responsibilities

About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.
Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. 
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
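For illustration only (not Tesco code): a minimal PySpark ETL sketch of the pipeline work described above - read raw data, apply a basic data-quality gate, and load a partitioned Hive table. Paths, schema and table names are invented for the example.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support so saveAsTable lands in the warehouse metastore.
spark = SparkSession.builder.appName("wfm-etl-sketch").enableHiveSupport().getOrCreate()

# Hypothetical raw extract of workforce-management shift records.
raw = spark.read.option("header", "true").csv("/data/raw/shifts")

clean = (
    raw.dropDuplicates(["shift_id"])                   # remove re-delivered records
       .where(F.col("store_id").isNotNull())           # basic data-quality gate
       .withColumn("load_ts", F.current_timestamp())   # audit/lineage column
)

# Partitioned load into the warehouse for downstream reporting.
clean.write.mode("overwrite").partitionBy("store_id").saveAsTable("wfm.shifts_clean")

Partitioning by a frequently filtered key keeps downstream queries pruned; monitoring hooks would normally wrap each stage to report row counts and quality-check failures.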
Posted 1 week ago