3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together.

Responsibilities include, but are not limited to:
The ideal candidate will be responsible for designing, developing, and maintaining data pipelines, serving as a core member of an agile team that drives user story analysis and elaboration and designs and develops responsive web applications using the best engineering practices. You will work closely with data scientists, analysts, and other partners to ensure the flawless flow of data, and you will build and optimize reports for analytical and business purposes. You will monitor and resolve data pipeline issues to ensure smooth operation, implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of data, and implement data governance policies, access controls, and security measures to protect critical data and ensure compliance. You will develop a deep understanding of integrations with other systems and platforms within the supported domains. Bring a culture of innovation, ideas, and continuous improvement; challenge the status quo, demonstrate risk taking, and implement creative ideas. Manage your own time, and work well both independently and as part of a team. Adopt emerging standards while promoting best practices and consistent framework usage, and work with Product Owners to define requirements for new features and plan increments of work.

Minimum Qualifications:
BS or MS degree in computer science, computer engineering, or another technical subject area, or equivalent work experience
3+ years of work experience
At least 5 years of hands-on experience with SQL, including schema design, query optimization, and performance tuning
Experience with distributed computing frameworks such as Hadoop, Hive, and Spark for processing large-scale data sets
Proficiency in a programming language such as Python or PySpark for building data pipelines and automation scripts
Understanding of cloud computing and exposure to at least one cloud platform (GCP, AWS, or Azure)
Knowledge of CI/CD, Git commands, and deployment processes
Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and optimize data processing workflows
Excellent communication and collaboration skills
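As an illustration of the data quality checks this posting describes, here is a minimal PySpark sketch covering completeness, accuracy, and consistency validations; the table and column names are hypothetical, not taken from the posting.

```python
# Minimal PySpark sketch of data quality checks (completeness, accuracy,
# consistency). Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.table("transactions")  # hypothetical source table
total = df.count()

# Completeness: no nulls in the key column.
null_ids = df.filter(F.col("transaction_id").isNull()).count()

# Accuracy: amounts must be positive.
bad_amounts = df.filter(F.col("amount") <= 0).count()

# Consistency: no duplicate primary keys.
duplicates = total - df.select("transaction_id").distinct().count()

failures = {"null_ids": null_ids, "bad_amounts": bad_amounts, "duplicates": duplicates}
if any(v > 0 for v in failures.values()):
    raise ValueError(f"Data quality checks failed: {failures}")
```

In a production pipeline a failed check would typically quarantine the bad rows or page the on-call engineer rather than simply raising.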
We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderābād
On-site
- 2+ years of experience processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark, or a Hadoop-based big data solution)
- 2+ years of experience with relational database technology (such as Redshift, Oracle, MySQL, or MS SQL)
- 2+ years of experience developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes)
- 5+ years of data engineering experience
- Experience managing a data or BI team
- Experience communicating with senior management and customers verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS

Do you pioneer? Do you enjoy solving complex problems in building and analyzing large datasets? Do you enjoy focusing first on your customer and working backwards? The Amazon transportation controllership team is looking for an experienced Data Engineering Manager with experience in architecting large, complex data systems and a strong record of achieving results and scoping and delivering large projects end-to-end. You will be the key driver in building out our vision for scalable data systems to support the ever-growing Amazon global transportation network businesses.

Key job responsibilities
As a Data Engineering Manager in Transportation Controllership, you will be at the forefront of managing large projects, providing vision to the team, and designing and planning large financial data systems that will allow our businesses to scale worldwide. You should have deep expertise in the database design, management, and business use of extremely large datasets, including using AWS technologies such as Redshift, S3, EC2, and Data Pipeline, along with other big data technologies. Above all, you should be passionate about warehousing large datasets together to answer business questions and drive change. You should have excellent business acumen and communication skills, be able to work with multiple business teams, and be comfortable communicating with senior leadership. Due to the breadth of the areas of business, you will coordinate across many internal and external teams and provide visibility to the senior leaders of the company with your strong written and oral communication skills. We need individuals with a demonstrated ability to learn quickly, think big, execute both strategically and tactically, and motivate and mentor their team to deliver business value to our customers on time.

A day in the life
On a daily basis you will:
• manage and help grow a team of high-performing engineers
• understand new business requirements and architect data engineering solutions for them
• plan your team's priorities, working with relevant internal/external stakeholders, including sprint planning
• resolve impediments faced by the team
• update leadership as needed
• use judgement in making the right tactical and strategic decisions for the team and organization
• monitor the health of the databases and ingestion pipelines

Preferred qualifications:
• Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
• Experience with AWS tools and technologies (Redshift, S3, EC2)

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
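As a small illustration of the pipeline-health monitoring mentioned in "A day in the life": since Redshift speaks the PostgreSQL wire protocol, a plain psycopg2 freshness check works as a minimal sketch. The cluster endpoint, credentials, and table name below are hypothetical placeholders.

```python
# Hypothetical sketch of an ingestion freshness check against Redshift.
# Redshift is PostgreSQL-wire compatible, so psycopg2 can connect to it.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="monitor", password="...",   # placeholders
)

with conn, conn.cursor() as cur:
    # Alert if the newest row in a pipeline's target table is over 2 hours old.
    cur.execute("""
        SELECT DATEDIFF(hour, MAX(load_timestamp), GETDATE())
        FROM shipments_fact  -- hypothetical ingestion target
    """)
    (hours_stale,) = cur.fetchone()
    if hours_stale is None or hours_stale > 2:
        print("ALERT: shipments_fact has not received new rows in over 2 hours")
```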
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 weeks ago
5.0 - 8.0 years
6 - 9 Lacs
Hyderābād
On-site
About the Role: Grade Level (for internal use): 09

The Team: We are looking for a highly motivated Engineer to join our team supporting the Marketplace Platform. The S&P Global Marketplace technology team consists of geographically diversified software engineers responsible for developing scalable solutions by working directly with the product development team. Our team culture is oriented towards equality in the realm of software engineering irrespective of hierarchy, promoting innovation. One should feel empowered to iterate over ideas and experiment without fear of failure.

Impact: You will enable the S&P business to showcase our proprietary S&P Global data, combine it with “curated” alternative data, further enrich it with value-add services from Kensho and others, and deliver it via the clients’ channel of choice to help them make better investment and business decisions, with confidence.

What you can expect: An unmatched experience in handling huge volumes of data, analytics, visualization, and services over cloud technologies, along with an appreciation of the product development life cycle needed to convert an idea into a revenue-generating stream.

Responsibilities: We are looking for a self-motivated, enthusiastic, and passionate software engineer to develop technology solutions for the S&P Global Marketplace product. The ideal candidate thrives in a highly technical role and will design and develop software using cutting-edge technologies consisting of web applications, data pipelines, big data, machine learning, and multi-cloud. The development is already underway, so the candidate would be expected to get up to speed very quickly and start contributing. Experience implementing web services (with WCF, RESTful JSON, SOAP, TCP), Windows services, and unit tests. Past experience working with AWS, Azure DevOps, Jenkins, Docker, Kubernetes/EKS, Ansible, and Prometheus or related cloud technologies. Good understanding of single-, hybrid-, and multi-cloud architecture, preferably with hands-on experience. Active participation in all scrum ceremonies, following Agile best practices effectively. Play a key role in the development team to build high-quality, high-performance, scalable code. Produce technical design documents and conduct technical walkthroughs. Document and demonstrate solutions using technical design docs, diagrams, and stubbed code. Collaborate effectively with technical and non-technical stakeholders. Respond to and resolve production issues.

What we are looking for: A minimum of 5-8 years of significant experience in application development. Proficiency with software development lifecycle (SDLC) methodologies like Agile and test-driven development. Experience working with high-volume data and computationally intensive systems. Garbage-collection-friendly programming experience; tuning Java garbage collection and performance is a must. Proficiency in the development environment, including IDE, web and application servers, Git, continuous integration, unit-testing tools, and defect management tools. Domain knowledge in the financial industry and capital markets is a plus. Excellent communication skills are essential, with strong verbal and writing proficiencies. Mentor teams, innovate and experiment, give shape to business ideas, and present to key stakeholders.

Required technical skills: Build data pipelines. Utilize platforms like Snowflake, Talend, and Databricks. Utilize cloud managed services like AWS Step Functions, AWS Lambda, and AWS DynamoDB. Develop custom solutions using Apache NiFi, Airflow, Spark, Kafka, Hive, and/or Spring Cloud Data Flow. Develop federated data services to provide scalable and performant data APIs (REST, GraphQL, OData). Write infrastructure as code to develop sandbox environments. Provide analytical capabilities using BI tools like Tableau and Power BI. Feed data at scale to clients that are geographically distributed. Experience building sophisticated and highly automated infrastructure. Experience with automation tools such as Terraform, CloudFormation, and Ansible. Demonstrated ability to adapt to new technologies and learn quickly.

Desirable technical skills: Java, Spring Boot, React, HTML/CSS, API development, micro-services pattern, cloud technologies and managed services (preferably AWS), big data and analytics, relational databases (preferably PostgreSQL), NoSQL databases.
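To make the cloud managed services named in the required skills concrete, here is a minimal sketch of an AWS Lambda handler persisting an event to DynamoDB via boto3, of the kind a Step Functions state machine might invoke; the table and field names are hypothetical.

```python
# Minimal sketch of an AWS Lambda handler that stores an event in DynamoDB.
# Table and field names are hypothetical, not from the posting.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("marketplace-events")  # hypothetical table

def handler(event, context):
    # A Step Functions state machine passes its state as the event payload.
    table.put_item(Item={
        "event_id": event["id"],
        "payload": event.get("payload", {}),
    })
    return {"status": "stored", "event_id": event["id"]}
```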
About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What’s In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 311642 Posted On: 2025-06-02 Location: Hyderabad, Telangana, India
Posted 2 weeks ago
1.0 years
0 Lacs
Hyderābād
On-site
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Do you have the technical prowess to build data solutions that process billions of rows a day using AWS technologies? Are you excited about creating real-time and batch analytics platforms that drive business decisions? Do you thrive on solving complex data challenges and implementing big data architectures? We're seeking a talented Data Engineer who is passionate about working with large-scale data analytics solutions and cloud technologies.

First things first, you must know SQL and data modelling like the back of your hand. You need to know big data and MPP systems. You have a history of coming up with innovative solutions to complex technical problems. You are a quick and willing learner of new technologies. You are not tool-centric; you determine what technology works best for the problem at hand and apply it accordingly. You will work with top-notch technical professionals developing complex systems at scale and with a focus on sustained operational excellence. We are looking for people who are motivated by thinking big, moving fast, and exploring business insights. If you love to implement solutions to hard problems while working hard, having fun, and making history, this may be the opportunity for you.

Preferred qualifications: Experience with big data technologies such as Hadoop, Hive, Spark, and EMR.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
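As a concrete illustration of the ETL pipeline work this posting describes, here is a minimal PySpark/SparkSQL sketch that aggregates raw events into a daily fact table; the table names and S3 path are hypothetical.

```python
# Minimal PySpark/SparkSQL sketch of a batch ETL step: aggregate raw events
# into a daily fact table. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily_etl").getOrCreate()

daily = spark.sql("""
    SELECT order_date,
           marketplace_id,
           COUNT(*)         AS order_count,
           SUM(order_total) AS gross_sales
    FROM raw_orders            -- hypothetical source table
    GROUP BY order_date, marketplace_id
""")

# Partitioned writes keep downstream BI queries cheap.
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/warehouse/daily_orders/"))  # hypothetical path
```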
Posted 2 weeks ago
0 years
2 - 9 Lacs
Hyderābād
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Responsibilities include working on MDM platforms (ETL, data modelling, data warehousing) and managing complex database-related analysis, design, implementation, and support for moderate to large sized databases. The role will help provide production support, enhance existing data assets, and design and develop ETL processes, and will be responsible for the design and development of ETL processes for a large data warehouse.

Primary Responsibility:
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Undergraduate degree or equivalent experience
Experience with Master Data Management platforms (ETL or EAI), data warehousing concepts, code management, and automated testing
Experience in developing ETL design guidelines, standards, and procedures to ensure a manageable ETL infrastructure across the enterprise
Experience with HDFS, Hive, Spark, and NoSQL (HBase)
Solid command of MS SQL/Oracle SQL, MongoDB, PL/SQL, and complex data analysis using SQL queries
Solid knowledge of data architecture concepts
Solid knowledge of reporting and analytics concepts
Knowledge of software engineering best practices, with experience implementing CI/CD using Jenkins
Knowledge of the Agile methodology for delivering software solutions

Preferred Qualifications:
Development experience in the Big Data ecosystem, with the ability to design, develop, document and architect Hadoop applications
Skills in SQL Server DB, Windows Server file handling, and PowerShell scripting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 weeks ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Full-time

Company Description
NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods manufacturers/fast-moving consumer goods and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what’s happening now, what’s happening next, and how to best act on this knowledge. We like to be in the middle of the action. That’s why you can find us at work in over 90 countries, covering more than 90% of the world’s population. For more information, visit www.niq.com.

Job Description
Our NielsenIQ Technology teams are working on our new “Connect” platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on NielsenIQ’s data and insights to innovate and grow. As a Software Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and prototyping cutting-edge technologies. Right now our platform is based on Scala, Snowflake, Databricks, and Python, and we continue to adopt the best of breed in cloud-native, low-latency technologies. We value CI/CD in everything that we develop. Our team is co-located and agile, with central technology hubs in Chicago, Toronto and Chennai. You will develop new back-end (BE) functionality, working closely with the front-end (FE) team, and contribute to the expansion of the NRPS scope.

Qualifications
We’re looking for people who have:
6+ years of experience
An excellent level of experience with Python
Experience with Scala and Databricks (appreciated)
Knowledge of Trino, Hive, and Oracle (a plus)
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business
A minimum B.S. degree in Computer Science, Computer Engineering or a related field

Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
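To illustrate the Trino and Hive knowledge mentioned in this posting's qualifications, here is a hedged sketch using the open-source `trino` Python client to query a Hive catalog; the host, catalog, schema, and table names are placeholders.

```python
# Hypothetical sketch of querying a Hive catalog through Trino using the
# `trino` Python package (DB-API interface). All names are placeholders.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # hypothetical coordinator
    port=8080,
    user="analyst",
    catalog="hive",
    schema="retail",
)

cur = conn.cursor()
cur.execute("SELECT store_id, SUM(units) FROM sales GROUP BY store_id LIMIT 10")
for store_id, units in cur.fetchall():
    print(store_id, units)
```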
Posted 2 weeks ago
4.0 years
3 - 8 Lacs
Gurgaon
On-site
Requirements:
Experience: 4-8 years
Good knowledge and hands-on experience of Big Data (HDFS, Hive, Kafka) testing (must)
Good knowledge and hands-on experience of SQL (must)
Good knowledge and hands-on experience of Linux (must)
Well versed with QA methodologies; both manual and automation testing backgrounds will work
Knowledge of DBT, AWS, or automation testing will be a plus

Job Type: Full-time
Schedule: Day shift
Work Location: In person
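As an illustration of the Big Data testing skills listed above, here is a minimal pytest sketch of a reconciliation check between a staged source and a Hive target table; the table names are hypothetical.

```python
# Minimal pytest sketch of a Big Data reconciliation test: row counts in the
# Hive target must match the staged source. Table names are hypothetical.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.enableHiveSupport().getOrCreate()

def test_target_matches_source(spark):
    source_count = spark.table("staging.orders_raw").count()   # hypothetical
    target_count = spark.table("warehouse.orders").count()     # hypothetical
    assert source_count == target_count, (
        f"Row count mismatch: staged {source_count}, loaded {target_count}"
    )
```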
Posted 2 weeks ago
6.0 - 11.0 years
20 - 25 Lacs
Pune
Work from Office
Hi, Greetings from Nitor Infotech! Please find the company and job description below. Only candidates with 6+ years of experience should apply; immediate joiners preferred.

Job Description:
Primary Skills:
• 8+ years of Data Engineering experience, with at least 3 years in senior roles.
• 5+ years of experience in Big Data technologies (e.g., Spark, Hive, Hadoop).
• Strong experience designing and implementing data pipelines.
• Excellent knowledge of data engineering concepts and best practices.
• Proven ability to lead, mentor, inspire and support more junior team members.
• Able to lead technical deliverables autonomously and lead more junior data engineers.
• Strong attention to detail and working according to best practices.
• Experience in designing solutions using batch data processing methods, real-time streams, ETL processes and Business Intelligence tools.
• Experience designing Logical Data Models and Physical Data Models, including data warehouse and data mart designs.
• Strong SQL knowledge and experience (T-SQL, working with SQL Server, SSMS).
• Apache Spark: advanced proficiency with Spark, including PySpark and SparkSQL, for distributed data processing.
• Working knowledge of Apache Hive.
• Proficiency in Python, Pandas, PySpark (Scala knowledge is a plus).
• Knowledge of Delta Lake concepts, common data formats, and Lakehouse architecture.
• Source control with Git.
• Expertise in designing and implementing scalable data pipelines and ETL processes using the GCP data stack, including BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Composer, Cloud Functions, and Dataproc (Spark).
• Apache Airflow: expertise in building and managing ETL workflows using Airflow, including DAG creation, scheduling, and error handling (see the sketch after this job description).
• Knowledge of CI/CD concepts and experience designing CI/CD for data pipelines.

Secondary Skills:
• Experience with streaming services such as Kafka is a plus.
• R and sparklyr experience is a plus.
• Knowledge of MLOps concepts, AI/ML life-cycle management, and MLflow.
• Expertise in writing complex, highly optimized queries across large data sets for data pipelines and data processing layers.
• Jenkins.

Candidate Profile:
Design, build, test and deploy innovative Big Data solutions at scale. Extract, clean, transform, and analyse vast amounts of raw data from various data sources. Build data pipelines and API integrations with various internal systems. Work across all stages of the data lifecycle. Implement best practices across all data analytics processes. Estimate effort, identify risks, and plan execution. Proactively monitor, identify, and escalate issues or root causes of systemic issues. Enable data scientists, business, and product partners to fully leverage our platform. Engage with business stakeholders to understand client requirements and build technical solutions and delivery plans. Evaluate and communicate technical risks effectively and ensure assignments are delivered on schedule with the desired quality. Provide end-to-end big data solution and design details to data engineering teams. Excellent analytical and problem-solving skills. Excellent communication skills, with experience communicating with senior business stakeholders. Lead technical delivery on use-cases; able to plan and delegate tasks to more junior team members and oversee the work from inception to final product.

Key Required Skills: Apache Airflow, Kafka, SQL, Data Engineering, CI/CD pipelines, Big Data, Apache Spark, Logical Data Model, Physical Data Model

Wish you all the best!
Thanks & Regards,
VIGNESH
6379146150
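The Airflow sketch referenced in the primary skills list above: a minimal Airflow 2.x DAG showing DAG creation, scheduling, retries (error handling), and a PythonOperator task. The pipeline logic itself is a placeholder.

```python
# Minimal Airflow 2.x DAG: creation, daily schedule, retry policy, one task.
# The extract/load function is a hypothetical placeholder.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the real pipeline logic (e.g., BigQuery/Dataflow calls).
    print("extracting and loading...")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",            # use schedule_interval on older 2.x versions
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```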
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title: Data Analyst
Work Location: Work is remote; however, the Chennai location is preferred.

Position Summary:
We are looking for a highly analytical and technically proficient Data Analyst to lead the design, development, and deployment of data-driven solutions for global sanctions compliance. The ideal candidate will work cross-functionally with advisory, engineering, and business teams to translate complex regulatory requirements into scalable and efficient systems that support financial crime risk management and sanctions screening.

Key Responsibilities:
Collaborate with advisory, product, engineering, and business stakeholders to develop strategic frameworks for sanctions compliance.
Interpret evolving regulatory requirements and coordinate with regional teams to define and finalize business requirements.
Translate regulatory and business needs into scalable, data-driven solutions and partner with engineering teams for implementation.
Refine PEP and sanctions list definitions in partnership with advisory and policy teams to ensure comprehensive and risk-aligned coverage.
Lead the full lifecycle of sanctions screening solution development, from design through implementation, ensuring alignment with business and regulatory objectives.
Provide technical mentorship and business guidance to junior data scientists.
Partner with engineering teams to ensure seamless deployment and real-time integration of solutions in production environments.
Drive modernization initiatives for sanctions platforms, leveraging big data technologies to enhance performance and scalability.
Conduct advanced data analysis to identify inefficiencies in sanctions screening processes and recommend actionable improvements.
Analyze large, complex datasets to derive insights, support decision-making, and communicate findings to stakeholders.

Qualifications:
Bachelor’s degree with at least 5 years of experience in sanctions transaction screening, profile screening, and AML.
Deep understanding of global sanctions programs (e.g., OFAC, EU, UN, CSSF).
Proficiency in SQL, BigQuery, Python, R, Tableau, and Power BI.
Strong analytical and problem-solving skills with advanced SQL and data profiling expertise.
Familiarity with Hadoop, Hive, Jupyter Notebooks, and data warehousing is highly desirable.
Excellent communication skills with the ability to translate data insights into business recommendations.
Proven ability to develop data-driven solutions for complex challenges and foster cross-functional collaboration.
Strong strategic thinking, intellectual curiosity, and comfort with ambiguity.
Demonstrated project leadership, business acumen, and multitasking capabilities in fast-paced environments.
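As a toy illustration of the sanctions screening problem described above, here is a standard-library-only fuzzy name match against a hypothetical sanctions list; production screening systems use far more sophisticated matching, transliteration, and list management than this sketch.

```python
# Toy sanctions name-screening sketch using only the standard library.
# The list entries and threshold are hypothetical.
from difflib import SequenceMatcher

SANCTIONS_LIST = ["Ivan Petrov", "Acme Trading LLC"]  # hypothetical entries

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return sanctions entries whose similarity to `name` exceeds the threshold."""
    hits = []
    for entry in SANCTIONS_LIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 3)))
    return hits

print(screen("Ivan Petrov Jr"))  # likely flags "Ivan Petrov" for analyst review
```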
Posted 2 weeks ago
3.0 years
5 - 8 Lacs
Gurgaon
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
The Sales Enablement Organization focuses on accelerating commercial business growth through training, tools and insights to provide a best-in-class customer experience and create a culture of doing it the right way.

Sales Ops & Governance Role: This position will support the development and implementation of analytical solutions to provide consultative support to the GCS leadership team. The incumbent will also highlight trends, risks, and opportunities to enhance business decision-making processes, while working very closely with Sales, Marketing, Capabilities, Technology, and Analytics teams to drive growth in the sales organization.

Key Responsibilities:
Perform in-depth data analysis to deliver strategic priorities focused on the sales enablement roadmap for Small/Medium Business
Demonstrate outstanding knowledge of Python, SQL, and Hive, encompassing data manipulation and statistical modeling/data-mining techniques
Work with huge unstructured data sets, applying analytical thinking to diagnose business needs and establish analytical hypotheses and solutions
Analyze, deep dive, and explore to identify data gaps, and solve them by collaborating across teams
Execute in detail the development, validation, and implementation of automated analytical solutions with minimal to no manual intervention
Leverage predictive modeling to identify tactics for channel optimization of existing areas and conceptualize opportunities
Challenge the status quo, innovate, and harbor strong curiosity; proactively identify opportunities to improve processes by evaluating and challenging existing approaches
Effectively challenge the conceptual soundness, theory, approach, and usage of predictive models

Minimum Qualifications:
3+ years of Database Architecture & Administration experience in a professional environment
Bachelor’s degree required, preferably in a quantitative field (e.g., Economics, Finance, Accounting, Statistics, Artificial Intelligence, Data Analytics, Engineering)
Must have: high proficiency in Python and SQL, with strong working knowledge of analytical tools (e.g., Hive, PySpark, scikit-learn)
Programming: SQL, SAS, Python/R, Unix scripting, Excel/VBA
Experience in a Big Data environment, inclusive of data mining techniques
Experience applying advanced statistical and/or quantitative techniques to solve business problems
Hands-on analytics and machine learning (ML) experience, with an understanding of data processing and model validation
Ability to address performance issues and to manipulate both structured and unstructured data
Advanced knowledge of the Microsoft Office Suite (Excel pivots, macros, deck-writing)
Ability to cultivate relationships and partner with multiple collaborators, with superb interpersonal and communication skills
Ability to deliver results, work independently, and prioritize tasks
Self-starter who thrives in an evolving, dynamic environment

Preferred Qualifications:
Proficiency in CRM tools, Salesforce, or statistical software programs
Big data platforms (Hadoop, Spark, NoSQL DBs, RDBMS)
Cloud products and services such as Google Cloud
Visualization: Tableau, Power BI, Power Automate, Splunk
Servicing platforms such as ServiceNow
Others: Confluence, SharePoint, or any other workflow and content management tool
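To picture the predictive modeling and model validation expectations above, here is a minimal scikit-learn sketch that trains and validates a classifier on synthetic data; none of the features or labels come from the posting.

```python
# Minimal scikit-learn sketch: train a classifier and validate it on a holdout
# set before any rollout decision. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))                                   # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")  # validate before acting on the model
```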
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 2 weeks ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
The deadline for submission of expressions is 03/06/2025 at 12:00 midday (Brussels time).

WE ARE
The European External Action Service (EEAS) supports the work of the High Representative in defining and implementing an effective and coherent foreign policy of the European Union and in her tasks of conducting the EU's Common Foreign and Security Policy and chairing the Foreign Affairs Council. It also supports the High Representative in her capacity of Vice President of the Commission. The EEAS works in close cooperation with Member States, the Council and relevant services of the European Commission.

Responsibilities
The Budget and Human Resources Directorate, RM.BHR, is responsible for providing appropriate financial and human resources to the EEAS, both in Headquarters and in the EU Delegations: We establish and manage the overall budgetary resources of the EEAS. We carry out the selection and recruitment for all non-management positions in the EEAS: officials, temporary agents, contract agents, seconded national experts. We support all EEAS HR policies with the objective of ensuring that the EEAS has the best-suited and highly motivated staff. We do so by providing tools to EEAS staff to be in a position to acquire the right knowledge, skills and mind-set necessary to do the job in a modern and performing diplomatic service at any given time of their professional life. We ensure that the EU Delegations have the right administrative support, we help Delegations implement rules and procedures, and we support all local agents employed in the EU Delegations.

We Propose
The EEAS is launching a call for expression of interest for the recruitment of one ASSISTANT (AST2) as Temporary Agent under Article 2(b) of the Conditions of Employment of Other Servants of the EU[1], in accordance with EEAS Decision Admin(2015) 20 on the engagement and use of temporary agents. Please note that candidates who have been engaged by the EEAS as non-permanent servants (temporary and contract agents engaged under Article 3b of the CEOS) will be bound by Decision ADMIN(2023) 24 of the High Representative of the Union for Foreign Affairs and Security Policy of 14/07/2023 on the maximum duration of engagement by the European External Action Service of non-permanent staff under successive limited duration contracts of different types, and on the minimum lapse of time between successive contracts under Article 2(e) of the CEOS and repealing the Decision ADMIN(2020) 10 of the High Representative of the Union for Foreign Affairs and Security Policy of 16/07/2020.

We Look For
A dynamic, proactive and highly motivated colleague with very good communication and organisational skills and a developed sense of service. The Administrative Assistant should understand the key priorities and functioning of the EU Diplomatic Service. He/she should be able to quickly adapt to the working environment and be a good team player, able to handle a very heavy workload in a dynamic team. The successful candidate should have good computer skills with a sound knowledge of the standard EU IT applications and administrative procedures. Experience in an EU Delegation is a strong asset.
He/she will be entrusted with the following main tasks:
provide efficient secretarial support;
carry out various administrative tasks such as diary-keeping and constant agenda management, filtering telephone calls, filing and ordering supplies, and dealing with correspondence;
document management: registration of incoming and outgoing correspondence, including in Ares;
organize missions using the MIPS/NEO applications;
prioritise information flow, including reports from and contacts with Delegations, as well as interaction for meetings with high-level interlocutors;
organize meetings and events and carry out the necessary arrangements for visitors to the building, including handling catering orders, registration, logistics and communication, as well as assist in welcoming and informing outside visitors in accordance with protocol and security rules;
co-ordinate the creation, keeping up to date and retrieval of documents and data in the appropriate files or IT databases, including budgetary information;
ensure the preparation of briefing files and speeches, coordination of inputs from various DGRM teams, etc.

LEGAL BASIS
The vacancy is to be filled in accordance with the conditions stipulated under the Conditions of Employment of Other Servants of the European Union (CEOS)[2], in accordance with EEAS Decision ADMIN(2015) 20 on the engagement and use of temporary agents. The successful candidate will be offered a contract as Temporary Agent under Article 2(b) of the Conditions of Employment of Other Servants (CEOS) at the grade of AST2.

Eligibility Criteria
Candidates must meet ALL of the following general and specific conditions on the closing date for online applications:
Be a national of one of the EU Member States and enjoy full rights as a citizen;
Meet any obligations imposed on him or her by the laws concerning military service;
Provide the appropriate character references[3] as to their suitability for the performance of their duties;
Have the capacity to work in languages of the CFSP in writing and orally;
Have post-secondary education attested by a diploma followed by at least 3 years of relevant professional experience directly related to the nature of the duties, OR secondary education attested by a diploma giving access to post-secondary education, followed by at least 6 years' professional experience directly related to the nature of the duties.
Candidates who, at the time of the application, are EU officials, independently of their administrative status under Article 35 of the SR, cannot request to be recruited as temporary agents under Article 2(b) of the CEOS and, in the interest of the service, are therefore ineligible.

Selection Criteria
Candidates should:
have a strong sense of discretion;
have proven experience in the secretarial field; experience in the field of external relations would be a strong asset, especially having served in EU Delegation(s);
be well organised, with the ability to deal with files in a timely manner;
be dynamic and have a strong sense of responsibility;
be a good team worker but also able to work autonomously and take initiative when appropriate;
have excellent computer skills (MS Office) and good knowledge of standard administrative, workflow-based IT applications (ARES, MIPS/NEO, Sysper2, EU-Learn, RESCOM, e-Brief, CIS-Net/Decide, HIVE) and administrative procedures (e.g.
e-Domec rules for document management);
be familiar with EEAS and Commission administrative and financial procedures;
be able to use tools for communicating/exchanging classified information;
have very good knowledge of English and knowledge of French;
hold or be in a position to obtain a valid Personnel Security Clearance (see below).

Furthermore, experience of working in a team in a multi-disciplinary and multi-cultural environment and experience in working with or within other EU institutions would be considered strong assets.

EXPRESSION OF INTEREST AND SELECTION PROCEDURE[4]
The selection procedure will take place in three different and successive steps:

Expression of Interest
Before submitting their application, candidates should carefully check whether they meet all the eligibility criteria in order to avoid exclusion from the selection procedure. Expressions of interest should be sent by e-mail to the following functional mailbox: EEAS-HQ-APPLICATIONS-AST@eeas.europa.eu. For the purposes of the e-mail application, the e-mail must have in the subject the following title: "EoI-HQ (AST)-TA2b-AST2-RM.BHR-487229".

Such an expression of interest must be accompanied by:
an updated curriculum vitae; candidates are invited to use the "Europass" CV format (https://europass.cedefop.europa.eu/documents/curriculum-vitae) for their applications;
a letter of motivation (maximum 2 pages) in either English or French;
the declaration of potential conflict of interest form, filled in, dated and signed.

The deadline for submission of expressions is 03/06/2025 at 12:00 midday (Brussels time). For correspondence concerning the selection procedure, please use the following email address (the e-mail must have in the subject the following title: "EoI-HQ (AST)-TA2b-AST2-RM.BHR-487229"): AST-STAFF@eeas.europa.eu

Pre-selection
The selection panel will make a pre-selection on the basis of the qualifications and the professional experience described in the CV and motivation letter.

Selection
The candidates who have passed the pre-selection step will be invited for an interview so that the selection panel can evaluate them objectively and impartially based on the selection criteria, as listed in the present call for expression of interest. Additional specific written or oral tests might be organised by the panel. Following a comparative assessment of the merits of the preselected candidates, the selection panel will recommend the name of a candidate to the Authority Authorised to Conclude Contracts of Employment, and possibly the name(s) of other candidate(s) that should be placed on a reserve list valid for a maximum of 12 months. That list would be used, on the one hand, in case of refusal of the offer or unavailability of the recommended candidate in the present procedure and/or, on the other hand, for other future similar recruitment needs. Depending on the outcome of future appointment procedure(s) under Article 29 of the SR for a similar profile, EEAS services may, in case no suitable candidates are found among EU officials, have other job opportunities that would involve the recruitment of Temporary Agents under Article 2(b) of the CEOS. If any, candidates on the above-mentioned reserve list may be contacted by other EEAS services for potential recruitment.

Equal Opportunities
The EEAS is committed to an equal opportunities policy for all its employees and applicants for employment. As an employer, the EEAS is committed to promoting gender equality and to preventing discrimination on any grounds.
It actively welcomes applications from all qualified candidates from diverse backgrounds and from the broadest possible geographical basis amongst the EU Member States. We aim at a service which is truly representative of society, where each staff member feels respected, is able to give their best and can develop their full potential. If pre-selected, candidates with disabilities are invited to contact the EEAS (EEAS-HQ-APPLICATIONS-AST@eeas.europa.eu) in order to accommodate any special needs and provide assistance to ensure the possibility to pass the selection procedure in equality of opportunities with other candidates. If a candidate with a disability is selected and recruited, the EEAS is committed to appropriate measures in order to accommodate his or her special needs in the working place or working conditions in accordance with Art 1d(4) of the Staff Regulations.

Recruitment
The selected candidate will be recruited as a temporary agent under Article 2(b) of the Conditions of Employment of Other Servants (CEOS), in accordance with EEAS Decision ADMIN(2015) 20 on the engagement and use of temporary agents. It is recalled that, if the interest of the service so requires, the selection procedure can be terminated at any stage and the post filled by a reassignment in accordance with Article 7 of the SR and Article 10 of the CEOS.

Conflict of Interest and Security Risks
As a matter of policy, applications from individuals who have dual nationality, one of which is of a non-EU country, will be considered on a case-by-case basis, taking account in particular of the functions attributed to the vacant post. The EEAS also examines whether there could be a conflict of interest or security risks. In this context, candidates shall file with their application a declaration of potential conflict of interest (Annex attached).

Personal Security Clearance
The requested level of security clearance for this post is SECRET UE/EU SECRET. A description of the EU classified information levels is available under Article 2 of the Decision ADMIN(2023) 18 on the security rules for the EEAS[5]. The selected candidate should hold, or be in the position to obtain, a valid Personnel Security Clearance (PSC)[6] issued by the competent authority of the Member State concerned. Candidates who do not already have a valid PSC will be required to go through the security clearance vetting procedure of their Member State to obtain this clearance in accordance with national laws and regulations and with the procedure laid down in the Decision ADMIN(2019) 7 on Security Clearance Requirements and Procedures for the EEAS of 08 March 2019 and in Annex A I of the Decision ADMIN(2023) 18 on the security rules for the EEAS[7]. Until the PSC is issued by the competent authority of the Member State concerned, the selected candidate will not be authorised to access EUCI at the level of CONFIDENTIEL UE/EU CONFIDENTIAL or above, or to participate in any meetings or workflows where EUCI is processed. Please note that the necessary procedure for obtaining a PSC can be initiated on request of the employer only, and not by the individual candidate. In case of failure to obtain or renew the required PSC, the AACC may take the appropriate measures in accordance with Article 3(3) of the Decision ADMIN(2019) 7 on Security Clearance Requirements and Procedures for the EEAS of 08 March 2019.

Medical Fitness
The selected candidate will be required to undergo a medical fitness examination in accordance with Article 13 of the CEOS.
In case of positive results on the medical fitness examination and after assessment of the PSC conditions, the candidate will be offered a contract as Temporary Agent, Grade AST2[8], for a duration of 4 years, renewable subject to the possibility of extension offered by the EEAS Decision ADMIN(2023) 24 on the maximum duration of engagement. All members of the temporary staff shall initially serve a nine-month probationary period in accordance with Article 14 CEOS.

PLACE OF EMPLOYMENT: Brussels, Belgium
POST AVAILABLE: 01/07/2025

Contact
Ms Elsa Fenet
Tel: +322 5842976
Email: Elsa.FENET@eeas.europa.eu

The closing date for submissions is 03/06/2025 at 12:00 midday (Brussels time).

[1] Staff Regulations of Officials (SR) and the Conditions of Employment of Other Servants of the European Union (CEOS). For reference, see: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1570023902133&uri=CELEX:01962R0031-20190101
[2] Staff Regulations of Officials (SR) and the Conditions of Employment of Other Servants of the European Union (CEOS). For reference, see: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1570023902133&uri=CELEX:01962R0031-20190101
[3] Criminal records certificate.
[4] Your personal data will be processed in accordance with Regulation (EC) 2018/1725. The privacy statement is available on the EEAS webpage: http://eeas.europa.eu/data_protection/rights/index_en.htm
[5] OJ C 263, 26 July 2023, p. 16.
[6] The ‘Personnel Security Clearance’ is defined under point 2 of Annex A I of the Decision ADMIN(2023) 18 on the security rules of the EEAS as “a statement by a competent authority of a Member State which is made following completion of a security investigation conducted by the competent authorities of a Member State and which certifies that an individual may, provided his ‘need-to-know’ has been determined, be granted access to EUCI up to a specified level (CONFIDENTIEL UE/EU CONFIDENTIAL or above) until a specified date; the individual thus described is said to be ‘security cleared’.”
[7] OJ C 263, 26 July 2023, p. 38.
[8] The basic salaries offered by the EU institutions are set out in Article 66 of the Staff Regulations. The current level can be accessed via the link: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:C_202301544
A number of factors are taken into account in calculating your pay. Use the pay calculator for an individual estimate: https://myintracomm.ec.europa.eu/staff/EN/working-conditions/pay/Pages/calculettes.aspx
Posted 2 weeks ago
6.0 years
6 - 9 Lacs
Noida
On-site
About Us:
Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them into the mainstream economy with the help of technology.

About the role:
Business analytics focuses on data, statistical analysis and reporting to help investigate and analyze business performance, provide insights, and drive recommendations to improve performance.

Experience: 6+ years

Expectations/Requirements:
1. Creative and dedicated individual who will fit with our collaborative culture
2. Cohesively work with a lot of people, across functions and teams, every day
3. Coordinate with other departments for compatibility of all aspects of each project
4. Develop comprehensive project plans along with key stakeholders
5. Program-manage initiatives that are driven centrally for technology improvements
6. Track program/project performance, specifically to analyze the successful completion of short- and long-term goals
7. Engage with various Business & Technology teams within Paytm to identify common bottlenecks, especially on the technology front
8. Enable and encourage use of common services to increase the speed of development and execution
9. Smart thinking and clear communication
10. Use and continually develop leadership skills
11. Be a brand ambassador for Paytm – Stay Hungry, Stay Humble, Stay Relevant!

Superpowers/Skills that will help you succeed in this role:
1. Build and maintain analytical reports and dashboards to provide a deep view of the performance of the FASTag funnel, user journeys and campaigns (a sketch of a simple funnel report follows this posting)
2. Enable test-and-learn for understanding user behavior and targeting growth opportunities; optimize the growth campaigns and deliver comprehensive test readouts to power accurate decisions
3. Understand the broad range of Paytm data resources, and know the right ones to use for the analytical problems at hand
4. Evangelize data-driven decision making within the team and to business and product owners
5. Identify data needs and drive data quality improvement projects
6. Proficient in SQL/Hive, with deep expertise in building scalable business reporting solutions
7. Past experience in optimizing business strategy, product or process using data and analytics
8. Working knowledge of at least one programming language like Scala, Java or Python
9. Working knowledge of dashboard visualization and CLM tools; ability to execute cross-functional initiatives

Education: Graduation/Post Graduation

Why Join Us:
1. A collaborative, output-driven program that brings cohesiveness across businesses through technology
2. Improve the average revenue per user by increasing cross-sell opportunities
3. Solid 360-degree feedback from your peer teams on your support of their goals
4. Respect, that is earned, not demanded, from your peers and manager

Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
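The funnel-report sketch referenced in the skills list above: a small pandas example computing step-to-step conversion from toy event data. The step names, columns, and data are hypothetical.

```python
# Toy funnel report in pandas: unique users per step and step-to-step
# conversion. Steps, columns, and data are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3],
    "step":    ["view", "apply", "activate", "view", "apply", "view"],
})

steps = ["view", "apply", "activate"]
users_per_step = [events.loc[events["step"] == s, "user_id"].nunique() for s in steps]

funnel = pd.DataFrame({"step": steps, "users": users_per_step})
funnel["conversion_from_prev"] = funnel["users"] / funnel["users"].shift(1)
print(funnel)  # drop-off between steps points to where to optimize campaigns
```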
Posted 2 weeks ago
10.0 years
0 - 0 Lacs
Noida
On-site
JOB DESCRIPTION
Our Company: Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
About Connect: Adobe Connect, within the Adobe DALP BU, is one of the best online webinar and training delivery platforms. The product has a huge customer base that has been using it for many years. The product has evolved magnificently over time, ensuring it stays on top of the latest tech stack. It offers the opportunity to work with a plethora of technologies on both the client and server side.
What You’ll Do:
· Work as a hands-on Machine Learning Engineer who will release models in production.
· Develop classifiers, predictive models, and multivariate optimization algorithms on large-scale datasets using advanced statistical modeling, machine learning, and data mining.
· Special focus on R&D: building predictive models for conversion optimization, bidding algorithms for pacing & optimization, reinforcement learning problems, and forecasting.
· Collaborate with Product Management to bring AI-based assistive experiences to life. Socialize what’s possible now or in the near future to inform the roadmap.
· Drive all aspects of ML product development: ML modeling, data/ML pipelines, quality evaluations, productization, and MLOps.
· Create and instill a team culture that focuses on sound scientific processes and encourages deep engagement with our customers.
· Handle project scope and risks with data, analytics, and creative problem-solving.
What you require:
· Solid foundation in machine learning, classifiers, statistical modeling and multivariate optimization techniques
· Experience with control systems, reinforcement learning problems, and contextual bandit algorithms (a toy sketch follows this posting)
· Experience with DNN frameworks like TensorFlow or PyTorch on large-scale datasets
· TensorFlow, R, scikit-learn, pandas
· Proficiency in one or more of: Python, Java/Scala, SQL, Hive, Spark
· Must have: GenAI and RAG pipelines. Good to have: Git, Docker, Kubernetes; cloud-based solutions
· General understanding of data structures, algorithms, multi-threaded programming, and distributed computing concepts
· Ability to be a self-starter and work closely with other data scientists and software engineers to design, test, and build production-ready ML and optimization models and distributed algorithms running on large-scale datasets
Ideal Candidate Profile:
· A total of 10+ years of experience, including at least 5 years in technical roles involving Data Science, Machine Learning, or Statistics
· Master’s or B.Tech in Computer Science/Statistics
· Comfort with ambiguity, adaptability to evolving priorities, and the ability to lead a team while working autonomously
· Proven management experience with highly diverse and global teams
· Demonstrated ability to influence technical and non-technical stakeholders
· Proven ability to effectively manage in a high-growth, matrixed organization
· Track record of delivering cloud-scale, data-driven products and services that are widely adopted with large customer bases
· An ability to think strategically, look around corners, and create a vision for the current quarter, the year, and five years down the road
· A relentless pursuit of great customer experiences and continuous improvements to the product
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
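The "contextual bandit" requirement above can be made concrete with a toy epsilon-greedy sketch. This is a generic illustration in plain NumPy, not Adobe's implementation; the per-arm linear reward model and the synthetic data are assumptions.

```python
# Epsilon-greedy contextual bandit sketch: one linear reward estimate per arm,
# updated online with a small SGD step. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_features, epsilon, lr = 3, 5, 0.1, 0.05
weights = np.zeros((n_arms, n_features))  # per-arm linear reward model

def choose_arm(context):
    if rng.random() < epsilon:                 # explore with probability epsilon
        return int(rng.integers(n_arms))
    return int(np.argmax(weights @ context))   # otherwise exploit the best estimate

def update(arm, context, reward):
    pred = weights[arm] @ context
    weights[arm] += lr * (reward - pred) * context  # SGD step on squared error

for _ in range(1000):                          # simulated interaction loop
    context = rng.normal(size=n_features)
    arm = choose_arm(context)
    reward = context[arm] + rng.normal(scale=0.1)   # synthetic reward signal
    update(arm, context, reward)
```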
Posted 2 weeks ago
2.0 - 4.0 years
2 - 8 Lacs
Noida
On-site
Expertise in AWS services like EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive- and Spark-related issues. Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, key-value, graphs, etc. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in programming languages such as Python/PySpark. Excellent written and verbal communication skills.
Key Responsibilities
Work closely with the data lake engineers to provide technical guidance, consultation and resolution of their queries.
Assist in development of simple and advanced analytics best practices, processes, technology & solution patterns and automation (including CI/CD).
Work closely with various stakeholders in the US team with a collaborative approach.
Develop data pipelines in Python/PySpark to be executed in the AWS cloud (a minimal sketch follows this posting).
Set up analytics infrastructure in AWS using CloudFormation templates.
Develop mini/micro-batch and streaming ingestion patterns using Kinesis/Kafka.
Seamlessly upgrade applications to higher versions, e.g., Spark/EMR upgrades.
Participate in code reviews of the developed modules and applications.
Provide inputs for formulation of best practices for ETL processes/jobs written in programming languages such as PySpark, and for BI processes.
Work with column-oriented data storage formats such as Parquet, interactive query services such as Athena, and the event-driven computing cloud service Lambda.
Perform R&D on the latest Big Data offerings in the market, perform comparative analysis and provide recommendations to choose the best tool as per the current and future needs of the enterprise.
Required Qualifications
Bachelor's or Master's degree in Computer Science or a similar field.
2-4 years of strong experience in big data development.
Expertise in AWS services like EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc.
Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive- and Spark-related issues.
Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, key-value, graphs, etc.
Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc.
Experience in programming languages such as Python/PySpark.
Excellent written and verbal communication skills.
Preferred Qualifications
Cloud certification (AWS, Azure or GCP).
About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm's focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You'll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven and want to work for a strong, ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law.
Full-Time/Part-Time: Full time
Timings: 2:00pm-10:30pm (India)
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology
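To make the "data pipeline in Python/PySpark" responsibility concrete, here is a minimal sketch of a batch job that reads raw CSV, cleans it, and writes partitioned Parquet that Athena could query. The S3 paths and column names are placeholders, not Ameriprise systems.

```python
# Minimal PySpark batch ETL sketch: CSV in, cleaned partitioned Parquet out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/transactions/")

cleaned = (raw
           .withColumn("amount", F.col("amount").cast("double"))  # enforce numeric type
           .filter(F.col("amount") > 0)                           # drop bad rows
           .withColumn("txn_date", F.to_date("txn_ts")))          # derive partition key

(cleaned.write
        .mode("overwrite")
        .partitionBy("txn_date")                                  # column-oriented, partitioned
        .parquet("s3://example-bucket/curated/transactions/"))
```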
Posted 2 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Do you have the technical prowess to build data solutions that process billions of rows a day using AWS technologies? Are you excited about creating real-time and batch analytics platforms that drive business decisions? Do you thrive on solving complex data challenges and implementing big data architectures? We're seeking a talented Data Engineer who is passionate about working with large-scale data analytics solutions and cloud technologies. First things first, you must know SQL and data modeling like the back of your hand. You need to know Big Data and MPP systems. You have a history of coming up with innovative solutions to complex technical problems. You are a quick and willing learner of new technologies. You are not tool-centric; you determine what technology works best for the problem at hand and apply it accordingly. You will work with top-notch technical professionals developing complex systems at scale and with a focus on sustained operational excellence. We are looking for people who are motivated by thinking big, moving fast, and exploring business insights. If you love to implement solutions to hard problems while working hard, having fun, and making history, this may be the opportunity for you.
Basic Qualifications
1+ years of data engineering experience
Experience with SQL
Experience with data modeling, warehousing and building ETL pipelines
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
Experience with one or more scripting languages (e.g., Python, KornShell)
Preferred Qualifications
Experience with big data technologies such as Hadoop, Hive, Spark, EMR
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2998264
Posted 2 weeks ago
4.0 - 8.0 years
11 - 12 Lacs
Gurugram
Work from Office
Big Data Tester
Requirements:
• Experience: 4-8 years
• Good knowledge and hands-on experience of Big Data (HDFS, Hive, Kafka) testing (must; see the pytest sketch below)
• Good knowledge and hands-on experience of SQL (must)
• Good knowledge and hands-on experience of Linux (must)
• Well versed with QA methodologies
• Both manual and automation testing backgrounds will work
• Knowledge of DBT, AWS or automation testing will be a plus
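As a flavor of the work, here is a toy data-quality test in PySpark under pytest; the Hive table names and checks are hypothetical, not the employer's actual suite.

```python
# Toy Big Data tests: basic completeness and reconciliation checks on Hive tables.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder.appName("dq-tests")
            .enableHiveSupport().getOrCreate())

def test_no_null_keys(spark):
    df = spark.table("orders")                         # hypothetical table
    assert df.filter(df.order_id.isNull()).count() == 0

def test_target_matches_staging(spark):
    src = spark.table("staging_orders").count()
    tgt = spark.table("orders").count()
    assert src == tgt, f"row count mismatch: {src} vs {tgt}"
```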
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities
Manage end-to-end feature development and resolve challenges faced in implementing it.
Learn new technologies and apply them in feature development within the time frame provided.
Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Tableau Desktop & Server; SQL, Oracle & Hive; communication skills; project management; multitasking; collaborative skills.
Proven experience in developing and working with Tableau-driven dashboards and analytics.
Ability to query and display large data sets while maximizing the performance of workbooks.
Ability to interpret technical or dashboard structure and translate complex business requirements into technical specifications.
Preferred Technical And Professional Experience
Tableau Desktop & Server; SQL, Oracle & Hive
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Lead Data Scientist
Title: Lead Data Scientist
Role Description: This is a full-time remote role for a Lead Data Scientist at Birdeye.
Primary Skills: Data Science, AI/ML, GenAI, LLM, NLP, Any Cloud (Azure/AWS/GCP), MLOps
Roles & Responsibilities:
Develop custom data models and algorithms to apply to data sets.
Develop and implement advanced machine learning models to address complex problems.
Clearly communicate findings, recommendations, and data-driven insights to PMs and executives.
Lead and mentor a team of data scientists, providing guidance on best practices and technical expertise.
Collaborate with cross-functional teams to define data-driven strategies and integrate data science solutions into products.
Continuously evaluate and improve the performance of existing models and algorithms.
Stay updated with the latest advancements in AI/ML and propose innovative solutions to enhance the company's capabilities.
Qualifications:
Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field, with a minimum of 5-8 years' experience.
Minimum of 2 years of experience leading a team of junior data scientists.
Should have been involved in end-to-end delivery of scalable, optimized, enterprise AI solutions.
Experience in performing prompt engineering and fine-tuning of AI/ML, GenAI, and LLM models.
Practical hands-on fine-tuning/transfer learning/optimization of Transformer-architecture-based deep learning models.
Experience with NLP tools such as Word2Vec, TextBlob, NLTK, SpaCy, Gensim, CoreNLP, BERT, GloVe, etc.
Experience with AWS/Azure/GCP cloud and deploying APIs using frameworks like FastAPI/gRPC (a minimal FastAPI sketch follows this posting).
Experience with Docker for deploying containers, and deployment of ML models on Kubernetes clusters.
Experience with NoSQL/SQL databases.
Good programming skills in Python.
Domain knowledge in online reputation management or experience in a product-based company (added advantage).
Expertise in delivering end-to-end analytical solutions covering multiple technologies & tools for multiple business problems.
Understanding of big data technologies like Hadoop, Spark, and Hive.
Strong knowledge of statistical methods and experimental design.
Experience with MLOps practices for CI/CD in machine learning.
Strong project management skills, with the ability to manage multiple projects simultaneously.
Excellent communication and presentation skills, with the ability to explain complex technical concepts to non-technical stakeholders.
Interested candidates, please send your resume to iqbal.kaur@birdeye.com
Regards,
Iqbal Kaur
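Since the posting calls out FastAPI for serving models, here is a minimal, hedged sketch of that pattern. The endpoint, request schema, and the stand-in "model" are invented for illustration; they are not Birdeye's API.

```python
# Minimal FastAPI model-serving sketch with a placeholder model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ReviewRequest(BaseModel):
    text: str

def placeholder_model(text: str) -> float:
    # stand-in for a real fine-tuned model loaded at startup
    return 1.0 if "great" in text.lower() else 0.0

@app.post("/predict")
def predict(req: ReviewRequest):
    return {"sentiment_score": placeholder_model(req.text)}

# Run locally with: uvicorn main:app --reload
```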
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Roles And Responsibilities
Design, develop and deploy machine learning models, algorithms and systems to solve complex business problems.
Theoretical understanding and practice of machine learning, with expertise in one or more topics such as NLP, computer vision, recommender systems and optimisation.
Implement robust and reliable software solutions for model deployment.
Support the team in maintaining machine learning pipelines, contributing to tasks like data cleaning, feature extraction and basic model training.
Participate in monitoring the performance of machine learning models, gaining experience in using statistical methods for evaluation.
Work with the Data Platforms teams to understand and collect the data.
Conduct performance testing, troubleshooting and tuning as required.
Stay current with the latest research and technology and communicate your knowledge throughout the enterprise.
Qualifications & Experience
Master's degree or PhD in Computer Science, Statistics, or an equivalent field, with a minimum of 5 years of experience in data science.
Should have worked on at least one GenAI project.
Should have expertise in deep learning & reinforcement learning techniques.
Should have experience in building statistical models using the above-mentioned techniques.
Should possess the ability to build & deploy models and interpret the results for business users.
Should have experience working with NLP models and building related applications such as chatbots, text classification, etc. (a toy text-classification sketch follows this posting).
Should have experience working with big data tools – Hadoop, Hive, Spark.
Proficiency in Python and a good understanding of OOPs is a must.
Worked with at least one mainstream machine learning framework such as TensorFlow, PyTorch or LangChain.
Experience working with cloud platforms (Azure, AWS, GCP).
Should have the ability to write complex SQL & stored procedures.
Should have experience working with APIs – deploying models, extracting data.
Should have good communication skills and the ability to interact with customers.
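The text-classification expectation above can be illustrated with a small scikit-learn pipeline; the four-example dataset is synthetic and exists only to make the snippet runnable.

```python
# Minimal text-classification sketch: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund my order", "love this product", "app keeps crashing", "great support"]
labels = ["complaint", "praise", "complaint", "praise"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["the app is great"]))   # -> ['praise'] (on this toy data)
```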
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a Senior Software Engineer to join our IMS Team in Bangalore. This is an amazing opportunity to work on Big Data technologies involved in content ingestion. The team consists of 10-12 engineers and reports to the Senior Manager. We have a great skill set in Spark, Java, Scala, Hive, SQL, XSLT, AWS EMR, S3, etc., and we would love to speak with you if you have skills in the same.
About You – Experience, Education, Skills, And Accomplishments
Work experience: minimum 4 years' experience in Big Data projects involving content ingestion, curation, and transformation.
Technical skills: Spark, Python/Java, Scala, AWS EMR, S3, SQS, Hive, XSLT.
Education: bachelor's degree in computer science, mechanical engineering, or a related field, or at least 4 years of equivalent relevant experience.
It Would Be Great If You Also Had
Experience in analyzing and optimizing performance.
Exposure to automation test frameworks.
Databricks.
Java/Python programming.
What will you be doing in this role?
Active role in planning, estimation, design, development and testing of large-scale, enterprise-wide initiatives to build or enhance a platform or custom applications that will be used for the acquisition, transformation, entity extraction, and mining of content on behalf of business units across Clarivate Analytics.
Troubleshooting and addressing production issues within the given SLA.
Coordination with global representatives and teams.
About The Team
We are a 12-member-strong team based out of India. We have end-to-end development and support ownership of the IMS product, which plays a vital role in content ingestion, aggregation, transformation, and enrichment of content that is transmitted to downstream applications. Our development methodology is Agile, and our system relies heavily on AWS EMR services.
Hours Of Work
45 hours per week, permanent position.
At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
Posted 2 weeks ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
We are looking for an Intermediate Data Engineer with 6 years of proven experience in building scalable data pipelines and managing large-scale data processing systems. The ideal candidate should have strong hands-on expertise in PySpark, SQL, and cloud platforms (preferably GCP), along with experience working with big data technologies and orchestration tools.
Key Responsibilities
Design, implement, and optimize data pipelines for large-scale data processing.
Develop and maintain ETL/ELT workflows using Spark, Hadoop, Hive, and Airflow (an illustrative DAG sketch follows this posting).
Collaborate with data scientists, analysts, and engineers to ensure data availability and quality.
Write efficient and optimized SQL queries for data extraction, transformation, and analysis.
Leverage PySpark and cloud tools (preferably Google Cloud Platform) to build reliable and scalable solutions.
Monitor and troubleshoot data pipeline performance and reliability issues.
Required Skills
4–6 years of experience in a Data Engineering role.
Strong hands-on experience with PySpark and SQL.
Good working knowledge of GCP or any major cloud platform (AWS, Azure).
Experience with Hadoop, Hive, and distributed data systems.
Proficiency in data orchestration tools such as Apache Airflow.
Ability to work independently in a fast-paced, agile environment.
Good To Have
Experience with data modeling and data warehousing concepts.
Exposure to DevOps and CI/CD practices for data pipelines.
Familiarity with other programming/scripting languages (Python, shell scripting).
Educational Qualification
Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
Skills: PySpark, GCP, Hadoop, Hive
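As a sketch of the Airflow orchestration mentioned above (not this employer's actual DAGs), a daily three-step ETL might look like this; the task bodies, IDs, and schedule are placeholders.

```python
# Illustrative Airflow DAG: extract -> transform -> load, run daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():   print("pull raw data")        # placeholder task body
def transform(): print("run PySpark job")      # placeholder task body
def load():      print("load to warehouse")    # placeholder task body

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # declare linear dependencies
```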
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Location: Indore - Work from Office
Job Type: Full-Time
Experience: 3 to 4 Years
Joining: Immediate
Job Description: We are looking for a passionate and talented Flutter Developer with 3 to 4 years of experience to join our growing team. The ideal candidate should have hands-on experience in developing cross-platform mobile applications using Flutter and be ready to contribute to a fast-paced development environment.
Key Responsibilities:
✓ Develop and maintain cross-platform mobile applications using Flutter.
✓ Collaborate with UI/UX designers and backend developers to deliver high-quality products.
✓ Integrate RESTful APIs and third-party libraries.
✓ Write clean, maintainable, and efficient code.
✓ Perform unit and integration testing to ensure app reliability.
✓ Troubleshoot and resolve bugs, performance issues, and crashes.
✓ Stay up to date with the latest Flutter and Dart advancements.
Requirements:
✓ 3 to 4 years of proven experience in Flutter development.
✓ Proficiency in the Dart programming language.
✓ Solid understanding of the Flutter framework and its ecosystem.
✓ Experience with state management tools like Provider, Riverpod, or BLoC.
✓ Familiarity with integrating APIs, Firebase, and local databases like SQLite or Hive.
✓ Understanding of mobile app design principles and best practices.
✓ Experience with version control systems like Git.
✓ Strong debugging and performance tuning skills.
Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹40,000.00 per month
Schedule: Day shift, fixed shift, Monday to Friday
Application Question(s): Can you start immediately?
Experience: Flutter: 3 years (Required); Dart: 3 years (Required)
Location: Indore, Madhya Pradesh (Required)
Work Location: In person
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are hosting a walk-in drive for Cloud technology professionals. Redefine your career with us! Join the walk-in drive at TCS Chennai.
Role: AWS Data Engineer
Experience: 4 to 8 years
Walk-In Drive Date: 7th June 2025
Registration Time: 09:30 AM – 12:30 PM
Venue: TCS Chennai Siruseri Office, 1/G1, SIPCOT IT Park, Navalur, Siruseri, Tamil Nadu 603103
Roles and Responsibilities:
· Good hands-on experience in Python programming and PySpark
· Data engineering experience using AWS core services (Lambda, Glue, EMR and Redshift)
· Required skill set: SQL, Airflow
· Must have: experience with Snowflake, Hive or AWS S3
· Responsibility: accountable for building/testing complex data pipelines (batch and near-real-time)
· Expectation: readable documentation of all the components being developed
· Experience in writing SQL and stored procedures
· Working experience with RDBMS (Oracle/Teradata)
Posted 2 weeks ago
5.0 - 8.0 years
15 - 25 Lacs
Pune
Work from Office
Design, develop & maintain scalable data pipelines & systems using PySpark and other big data tools. Monitor, troubleshoot & resolve issues in data workflows & pipelines. Implement best practices for data processing, security & storage.
Required Candidate profile: Strong programming skills in PySpark & Python. Experience with big data frameworks like Hadoop, Spark, or Kafka (a streaming-read sketch follows this posting). Proficiency in working with cloud platforms. Experience with data modeling & working with databases.
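For the Kafka item in the profile, a minimal PySpark Structured Streaming read might look like the following; the broker address and topic are placeholders, and the job additionally needs the Spark-Kafka connector package on the classpath.

```python
# Sketch: read a Kafka topic with PySpark Structured Streaming and echo to console.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "events")                     # placeholder topic
          .load()
          .selectExpr("CAST(value AS STRING) AS payload"))   # decode message bytes

query = (stream.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```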
Posted 2 weeks ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us: Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them into the mainstream economy with the help of technology.
About the role: Business analytics focuses on data, statistical analysis and reporting to help investigate and analyze business performance, provide insights, and drive recommendations to improve performance.
Experience: 6+ years
Expectations/Requirements:
1. Creative and dedicated individual who will fit with our collaborative culture
2. Work cohesively with many people, across functions and teams, every day
3. Coordinate with other departments for compatibility of all aspects of each project
4. Develop comprehensive project plans along with key stakeholders
5. Program-manage initiatives that are driven centrally for Technology improvements
6. Track program/project performance, specifically to analyze the successful completion of short- and long-term goals
7. Engage with various Business & Technology teams within Paytm to identify common bottlenecks, especially on the Technology front
8. Enable and encourage use of common services to increase the speed of development and execution
9. Smart thinking and clear communication
10. Use and continually develop leadership skills
11. Be a brand ambassador for Paytm – Stay Hungry, Stay Humble, Stay Relevant!
Superpowers/Skills that will help you succeed in this role:
1. Build and maintain analytical reports and dashboards to provide a deep view of the performance of the FASTag funnel, user journeys and campaigns
2. Enable test-and-learn for understanding user behavior and targeting growth opportunities; optimize growth campaigns and deliver comprehensive test readouts to power accurate decisions
3. Understand the broad range of Paytm data resources, and know the right ones to use for the analytical problems at hand
4. Evangelize data-driven decision making within the team and to business & product owners
5. Identify data needs and drive data quality improvement projects
6. Proficiency in SQL/Hive and deep expertise in building scalable business reporting solutions
7. Past experience in optimizing business strategy, product or process using data & analytics
8. Working knowledge of at least one programming language like Scala, Java or Python
9. Working knowledge of dashboard visualization and CLM tools; ability to execute cross-functional initiatives
Education: Graduation/Post Graduation
Why Join Us:
1. A collaborative, output-driven program that brings cohesiveness across businesses through technology
2. Improve the average revenue per user by increasing cross-sell opportunities
3. Solid 360-degree feedback from your peer teams on your support of their goals
4. Respect that is earned, not demanded, from your peers and manager
Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
Posted 2 weeks ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
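For readers new to Hive, here is a small taste of HiveQL, run through PySpark's Hive support; the partitioned table and its columns are illustrative only.

```python
# Create a partitioned Hive table and run a simple aggregate over it.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS page_views (
        user_id STRING,
        url     STRING
    )
    PARTITIONED BY (view_date DATE)
    STORED AS PARQUET
""")

spark.sql("""
    SELECT view_date, COUNT(*) AS views
    FROM page_views
    GROUP BY view_date
    ORDER BY view_date
""").show()
```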
Major Indian tech hubs are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!