10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Detailed Job Description for Solution Architect (PAN India)

Architectural Assessment & Roadmapping
* Conduct a comprehensive assessment of the current R&D Data Lake architecture.
* Propose and design the architecture for the next-generation self-service R&D Data Lake based on defined product specifications.
* Contribute to defining a detailed architectural roadmap that incorporates the latest enterprise patterns and strategic recommendations for the engineering team.

Data Ingestion & Processing Enhancements
* Design and prototype updated data ingestion mechanisms that meet GxP validation requirements and improve data flow efficiency.
* Architect advanced data and metadata processing techniques to enhance data quality and accessibility.

Storage Pattern Optimization
* Evaluate optimized storage patterns to ensure scalability, performance, and cost-effectiveness.
* Design updated storage solutions aligned with technical roadmap objectives and compliance standards.

Data Handling & Governance
* Define and document standardized data handling procedures that adhere to GxP and data governance policies.
* Collaborate with governance teams to ensure procedures align with regulatory standards and best practices.
* Assess current security measures and implement robust access controls to protect sensitive R&D data, ensuring all security enhancements adhere to enterprise security frameworks and regulatory requirements.
* Design and implement comprehensive data cataloguing procedures to improve data discoverability and usability, integrating cataloguing processes with existing data governance frameworks to maintain continuity and compliance.
* Recommend and oversee the implementation of new tools and technologies related to ingestion, storage, processing, handling, security, and cataloguing, with plans that ensure seamless integration and minimal disruption during technology updates.
* Collaborate on the ongoing maintenance of, and provide technical support for, legacy data ingestion pipelines throughout the uplift project, ensuring legacy systems remain stable, reliable, and efficient during the transition period.
* Work closely with the R&D IT team, data governance groups, and other stakeholders for coordinated and effective implementation of architectural updates.
* Collaborate in knowledge transfer sessions to equip internal teams to manage and maintain the new architecture post-project.

Required Skills
* Bachelor’s degree in Computer Science, Information Technology, or a related field, with equivalent hands-on experience.
* Minimum 10 years of experience in solution architecture, with a strong background in data architecture and enterprise data management.
* Strong understanding of cloud-native platforms, with a preference for AWS; knowledgeable in distributed data architectures, including services like S3, Glue, and Lake Formation.
* Proven experience in programming languages and tools relevant to data engineering (e.g., Python, Scala).
* Experienced with Big Data technologies such as Hadoop, Cassandra, Spark, Hive, and Kafka.
* Skilled in using querying tools such as Redshift, Spark SQL, Hive, and Presto.
* Demonstrated experience in data modeling, data pipeline development, and data warehousing.

Infrastructure and Deployment
* Familiar with Infrastructure-as-Code tools, including Terraform and CloudFormation.
* Experienced in building systems around the CI/CD concept.
* Hands-on experience with AWS services and other cloud platforms.
Posted 1 week ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Principal Architect, you will work to solve some of the most complex and captivating data management problems, enabling clients to operate as data-driven organizations. You will seamlessly switch between the roles of individual contributor, team member, and architect as each project demands, to define, design, and deliver actionable insights.

On a typical day, you might:
* Understand business requirements and translate them into conceptual, logical, and physical data models.
* Work as a principal advisor on data architecture across various data requirements: aggregation, data lake, data models, data warehouse, etc.
* Lead cross-functional teams, define data strategies, and leverage the latest technologies in data handling.
* Define and govern data architecture principles, standards, and best practices to ensure consistency, scalability, and security of data assets across projects.
* Suggest the best modelling approach to the client based on their requirements and target architecture.
* Analyze and understand the datasets and guide the team in creating source-to-target mappings and data dictionaries, capturing all relevant details.
* Profile the datasets to generate relevant insights.
* Optimize the data models and work with the data engineers to define ingestion logic, ingestion frequency, and data consumption patterns.
* Establish data governance practices, including data quality, metadata management, and data lineage, to ensure data accuracy, reliability, and compliance.
* Drive automation in modeling activities.
* Collaborate with business stakeholders, data owners, business analysts, and architects to design and develop the next-generation data platform.
* Closely monitor project progress and provide regular updates to leadership on milestones, impediments, etc.
* Guide and mentor team members, and review artifacts.
* Contribute to the overall data strategy and roadmaps.
* Propose and execute technical assessments and proofs of concept to promote innovation in the data space.

What do we expect? Skills that we’d love:
* Minimum 15 years of experience.
* Deep understanding of data architecture principles, data modelling, data integration, data governance, and data management technologies.
* Experience in data strategies and developing logical and physical data models on RDBMS, NoSQL, and cloud-native databases.
* Solid experience in one or more RDBMS systems (such as Oracle, DB2, SQL Server).
* Good understanding of Relational, Dimensional, and Data Vault modelling.
* Experience in implementing two or more data models in a database with data security and access controls.
* Good experience in OLTP and OLAP systems.
* Excellent data analysis skills with demonstrable knowledge of standard datasets and sources.
* Good experience with one or more cloud DWs (e.g., Snowflake, Redshift, Synapse).
* Experience with one or more cloud platforms (e.g., AWS, Azure, GCP).
* Understanding of DevOps processes.
* Hands-on experience with one or more data modelling tools.
* Good understanding of one or more ETL tools and data ingestion frameworks.
* Understanding of data quality and data governance.
* Good understanding of NoSQL databases and modeling techniques.
* Good understanding of one or more business domains.
* Understanding of the Big Data ecosystem.
* Understanding of industry data models.
* Hands-on experience in Python.
* Experience in leading large and complex teams.
* Good understanding of agile methodology.

You are important to us, let’s stay connected!
Every individual comes with a different set of skills and qualities so even if you don’t tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow. We are an equal opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.
Posted 1 week ago
20.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Over the past 20 years Amazon has earned the trust of over 300 million customers worldwide by providing unprecedented convenience, selection and value on Amazon.com. By deploying Amazon Pay’s products and services, merchants make it easy for these millions of customers to safely purchase from their third-party sites using the information already stored in their Amazon account. In this role, you will lead Data Engineering efforts to drive automation for the Amazon Pay organization. You will be part of the data engineering team that will envision, build and deliver high-performance, fault-tolerant data pipelines. As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions.

Key job responsibilities
* Design, implement, and support a platform providing ad-hoc access to large data sets
* Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
* Implement data structures using best practices in data modeling, ETL/ELT processes, and SQL, Redshift, and OLAP technologies
* Model data and metadata for ad-hoc and pre-built reporting
* Interface with business customers, gathering requirements and delivering complete reporting solutions
* Build robust and scalable data integration (ETL) pipelines using SQL, Python and Spark
* Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
* Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Basic Qualifications
* 3+ years of data engineering experience
* Experience with data modeling, warehousing and building ETL pipelines
* Experience with SQL

Preferred Qualifications
* Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
* Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2986853
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Senior Data Engineer

Position Summary
The Senior Data Engineer leads complex data engineering projects, designing data architectures that align with business requirements. This role focuses on optimizing data workflows, managing data pipelines, and ensuring the smooth operation of data systems.

Minimum Qualifications
8 years of overall IT experience, with a minimum of 5 years of work experience in the tech skills below.

Tech Skills
* Strong experience in Python scripting and PySpark for data processing (see the illustrative sketch after this posting).
* Proficiency in SQL, dealing with big data over Informatica ETL.
* Proven experience in data quality and data optimization of a data lake in Iceberg format, with a strong understanding of the architecture.
* Experience in AWS Glue jobs.
* Experience in the AWS cloud platform and its data services: S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, EventBridge.
* Expertise in Bash shell scripting.
* Strong understanding of healthcare data systems and experience leading data engineering teams.
* Experience in Agile environments.
* Excellent problem-solving skills and attention to detail.
* Effective communication and collaboration skills.

Responsibilities
* Leads development of data pipelines and architectures that handle large-scale data sets.
* Designs, constructs, and tests data architecture aligned with business requirements.
* Provides technical leadership for data projects, ensuring best practices and high-quality data solutions.
* Collaborates with product, finance, and other business units to ensure data pipelines meet business requirements.
* Works with dbt (Data Build Tool) for transforming raw data into actionable insights.
* Oversees development of data solutions that enable predictive and prescriptive analytics.
* Ensures the technical quality of solutions, managing data as it moves across environments.
* Aligns data architecture to Healthfirst solution architecture.
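As an illustrative reference point for the Python/PySpark skills this posting lists, here is a minimal batch-cleansing sketch. The bucket paths, columns, and claim-oriented naming are hypothetical, and a production pipeline at this level would typically target Iceberg tables with Glue/Airflow orchestration rather than plain Parquet.

```python
# Minimal PySpark sketch: read raw events from S3, apply basic data-quality
# filtering and deduplication, and write partitioned Parquet output.
# Bucket names, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-cleansing").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/claims/2025/06/")

cleansed = (
    raw
    .filter(F.col("member_id").isNotNull())           # drop rows missing the key
    .withColumn("service_date", F.to_date("service_date"))
    .dropDuplicates(["claim_id"])                      # keep one row per claim
)

(
    cleansed.write
    .mode("overwrite")
    .partitionBy("service_date")
    .parquet("s3://example-curated-bucket/claims/")
)
```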
Posted 1 week ago
3.0 years
4 - 6 Lacs
Hyderābād
On-site
Basic Qualifications
* 3+ years of data engineering experience
* 4+ years of SQL experience
* Experience with data modeling, warehousing and building ETL pipelines

As a Data Engineer you will be working on building and maintaining complex data pipelines, assembling large and complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will have an opportunity to collaborate and work with various teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement and support solutions. You will be challenged and provided with tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Key job responsibilities
* Design, implement and support analytical data platform solutions for data-driven decisions and insights (a minimal loading sketch follows this posting)
* Design data schemas and operate internal data warehouses and SQL/NoSQL database systems
* Work on different data model designs, architecture, implementation, discussions and optimizations
* Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch etc.
* Work on different AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
* Work on the data lake platform and different components in the data lake such as Hadoop, Amazon S3 etc.
* Work on SQL technologies on Hadoop such as Spark, Hive, Impala etc.
* Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers
* Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment
* Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
* Enjoy working closely with your peers in a group of talented engineers and gain knowledge
* Be enthusiastic about building deep domain knowledge of various Amazon business domains
* Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions

Preferred Qualifications
* Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
* Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
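A minimal sketch of one loading pattern this posting alludes to: staging files in S3 and loading them into Redshift with a COPY command. The cluster endpoint, credentials, table, bucket, and IAM role are hypothetical placeholders.

```python
# Minimal sketch: load Parquet files from S3 into a Redshift table using the
# COPY command. Connection details, table, bucket, and IAM role are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

copy_sql = """
    COPY analytics.orders
    FROM 's3://example-bucket/orders/2025/06/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopyRole'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)   # Redshift pulls the files directly from S3
```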
Posted 1 week ago
4.0 years
0 Lacs
Hyderābād
On-site
Do you love understanding every detail of how new technologies work? Join the team that serves as Apple’s nerve center, our Information Systems and Technology group. There are countless ways you’ll contribute here, whether you’re coordinating technology needs for product launches, designing music solutions for retail locations, or ensuring the strength of in-store Wi-Fi connections. From Apple Pay to the Apple website to our data centers around the globe, you’ll help design and manage the massive systems that countless employees and customers rely on every day. You’ll also build custom tools for employees, empowering them to solve complex problems on their own. Join our team, and together we’ll explore all the ways to improve how Apple operates, freeing our employees to do what they do best: craft magical experiences for our customers.

The Global Business Intelligence team provides data services, analytics, reporting, and data science solutions to Apple’s business groups, including Retail, iTunes, Marketing, AppleCare, Operations, Finance, and Sales. These solutions are built on top of a great data platform and leverage multiple frameworks. This position is an extraordinary opportunity for a proficient, experienced, and driven data platform engineer to solve database design and optimization problems and provide a scalable, high-performance and dynamic Enterprise Data Warehouse (EDW) platform.

Description
As a Cloud Data Platform Engineer, you will be responsible for leading all aspects of a database platform. This includes database design, database security, DR strategy, developing standard processes, evaluating new features, and analyzing workloads to identify optimization opportunities at a system and application level (a minimal workload-analysis sketch follows this posting). You will drive automation efforts to effectively manage the database platform and build self-service solutions for users. You will also partner with development teams, product managers and business users to review the solution designs being deployed and provide recommendations to optimize and tune. This role will also address any platform-wide performance and stability issues. We're looking for an individual who loves a challenge, takes on problems with imaginative solutions, and works well in collaborative teams to build and support a large Enterprise Data Warehouse.

Minimum Qualifications
* 4+ years of experience in database technologies like Snowflake (preferred), Teradata, BigQuery or Redshift.
* Demonstrated ability working with advanced SQL.
* Experience handling DBA functions, DR strategy, data security, governance, and the associated automation and tooling for a database platform.

Key Qualifications
* Experience with object-oriented programming in Python or Java.
* Ability to analyze production workloads and develop strategies to run the Snowflake database with scale and efficiency.
* Experience in performance tuning, capacity planning, and managing cloud spend and utilization.
* Experience with SaaS/PaaS enterprise services on GCP/AWS or Azure is a plus.
* Familiarity with in-memory database platforms like SingleStore is a plus.
* Experience with Business Intelligence (BI) platforms like Tableau, ThoughtSpot and Business Objects is a plus.
* Good communication and interpersonal skills: ability to interact and work well with members of other functional groups in a project team, and a strong sense of project ownership.

Education & Experience
Bachelor’s Degree in Computer Science, Engineering or IT from a reputed school
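As a rough illustration of the workload-analysis work described above, here is a minimal sketch that pulls the slowest recent queries from Snowflake's ACCOUNT_USAGE history using the Snowflake Python connector. The account, user, role, and warehouse values are hypothetical.

```python
# Minimal sketch: list the slowest queries of the past week per warehouse from
# Snowflake's ACCOUNT_USAGE views to spot optimization candidates.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",       # hypothetical account locator
    user="platform_engineer",
    password="***",
    role="ACCOUNTADMIN",
    warehouse="ADMIN_WH",
)

sql = """
    SELECT warehouse_name,
           query_id,
           total_elapsed_time / 1000 AS elapsed_s,
           bytes_scanned
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    ORDER BY total_elapsed_time DESC
    LIMIT 20
"""

cur = conn.cursor()
try:
    cur.execute(sql)
    for warehouse, query_id, elapsed_s, scanned in cur:
        print(f"{warehouse}\t{elapsed_s:.1f}s\t{scanned} bytes\t{query_id}")
finally:
    cur.close()
    conn.close()
```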
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Description
Want to join the Earth’s most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at the goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities
* Maintain and refine straightforward ETL and write secure, stable, testable, maintainable code with minimal defects; automate manual processes.
* Use one or more industry analytics visualization tools (e.g. Excel, Tableau/QuickSight/Power BI) and, as needed, statistical methods (e.g. t-test, Chi-squared) to deliver actionable insights to stakeholders.
* Build and own small to mid-size BI solutions with high accuracy and on-time delivery, using data sets, queries, reports, dashboards, analyses or components of larger solutions to answer straightforward business questions with data, incorporating business intelligence best practices, data management fundamentals, and analysis principles.
* Maintain a good understanding of the relevant data lineage: the sources of data; how metrics are aggregated; and how the resulting business intelligence is consumed, interpreted and acted upon by the business, where the end product enables effective, data-driven business decisions.
* Take responsibility for the code, queries, reports and analyses that are inherited or produced, and have analyses and code reviewed periodically.
* Partner effectively with peer BIEs and others in your team to troubleshoot, research root causes and propose solutions, either taking ownership of their resolution or ensuring a clear hand-off to the right owner.

About The Team
The Global Operations – Artificial Intelligence (GO-AI) team is an initiative which remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale.
It is operating multiple programs, including Nike IDS, Proteus, Sparrow and other new initiatives, in partnership with global technology and operations teams.

Basic Qualifications
* 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL etc.
* Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
* Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, Chi-squared; see the short worked example after this posting)
* Experience with a scripting language (e.g., Python, Java, or R)
* Experience applying basic statistical methods (e.g. regression) to difficult business problems

Preferred Qualifications
* Master's degree, or advanced technical degree
* Experience with statistical analysis and correlation analysis
* Knowledge of how to improve code quality and optimize BI processes (e.g. speed, cost, reliability)
* Excellence in technical communication with peers, partners, and non-technical cohorts

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - Amazon Development Centre (India) Private Limited
Job ID: A2987022
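A short worked example of the kind of statistical check the qualifications mention: a two-sample t-test plus a simple aggregate that could feed a dashboard. The CSV file, workflow labels, and column names are hypothetical.

```python
# Minimal sketch: compare handling time between two workflows with a Welch
# two-sample t-test, then compute a daily aggregate for a BI dashboard.
import pandas as pd
from scipy import stats

df = pd.read_csv("exception_events.csv")   # hypothetical extract

a = df.loc[df["workflow"] == "A", "handle_time_s"]
b = df.loc[df["workflow"] == "B", "handle_time_s"]

t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(f"t={t_stat:.2f}, p={p_value:.4f}")

# Simple aggregate that could feed a QuickSight/Tableau/Power BI dashboard
daily = df.groupby("event_date")["handle_time_s"].agg(["count", "mean"])
print(daily.tail())
```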
Posted 1 week ago
4.0 years
0 Lacs
Hyderābād
On-site
Job Summary:
We are looking for an experienced Data Engineer with 4+ years of proven expertise in building scalable data pipelines, integrating complex datasets, and working with cloud-based and big data technologies. The ideal candidate should have hands-on experience with data modeling, ETL processes, and real-time data streaming.

Key Responsibilities:
* Design, develop, and maintain scalable and efficient data pipelines and ETL workflows.
* Work with large datasets from various sources, ensuring data quality and consistency.
* Collaborate with Data Scientists, Analysts, and Software Engineers to support data needs.
* Optimize data systems for performance, scalability, and reliability.
* Implement data governance and security best practices.
* Troubleshoot data issues and identify improvements in data processes.
* Automate data integration and reporting tasks.

Required Skills & Qualifications:
* Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
* 4+ years of experience in data engineering or similar roles.
* Strong programming skills in Python, SQL, and shell scripting.
* Experience with ETL tools (e.g., Apache Airflow, Talend, AWS Glue).
* Proficiency in data modeling, data warehousing, and database design.
* Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services like S3, Redshift, BigQuery, Snowflake.
* Experience with big data technologies such as Spark, Hadoop, Kafka.
* Strong understanding of data structures, algorithms, and system design.
* Familiarity with CI/CD tools, version control (Git), and Agile methodologies.

Preferred Skills:
* Experience with real-time data streaming (Kafka, Spark Streaming).
* Knowledge of Docker, Kubernetes, and infrastructure-as-code tools like Terraform.
* Exposure to machine learning pipelines or data science workflows is a plus.

Interested candidates can send their resume.

Job Type: Full-time
Schedule: Day shift
Work Location: In person
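To illustrate the ETL orchestration this posting references, here is a minimal Apache Airflow sketch of a daily three-step pipeline. The DAG id, task names, and callables are hypothetical stubs rather than a specific production workflow.

```python
# Minimal Apache Airflow sketch: a daily extract -> transform -> load DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data (e.g. from S3 or an API)")


def transform():
    print("clean and model the extracted data")


def load():
    print("load the modeled data into the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```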
Posted 1 week ago
6.0 years
9 - 10 Lacs
Gurgaon
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Data Engineer III
Expedia Group’s CTO Enablement team is looking for a highly motivated Data Engineer III to lead the design, delivery, and stewardship of business-critical data infrastructure that powers our Capitalization program and Business Operations functions. This role is at the intersection of finance, strategy, and engineering, where data precision and operational rigor directly support the company’s financial integrity and execution effectiveness. You will collaborate with stakeholders across Finance, BizOps, and Technology to build scalable data solutions that ensure capitalization accuracy, enable deep operational analytics, and streamline financial and business reporting at scale.

What you will do:
* Design, build, and maintain high-scale data pipelines and transformation logic to support CapEx/OpEx classification, capitalization tracking, and operational data modeling.
* Deliver clean, well-documented, governed datasets that drive finance reporting, strategic planning, and key operational dashboards.
* Partner with cross-functional teams (Finance, Engineering, Strategy) to translate business and compliance requirements into technical solutions.
* Lead the development of data models and ETL processes to support performance monitoring, workforce utilization, project financials, and business KPIs.
* Establish and enforce data quality, lineage, and access control standards to ensure trust in business-critical data.
* Proactively identify and resolve data reliability issues related to financial close processes, budget tracking, and capitalization rules.
* Serve as a technical advisor to BizOps and Finance stakeholders, recommending improvements in tooling, architecture, and process automation.
* Mentor other engineers and contribute to the growth of a high-performance data team culture.

Who you are:
* 6+ years of experience in data engineering, analytics engineering, or data infrastructure roles with a focus on operational and financial data.
* Expertise in SQL and Python, and experience with data pipeline orchestration tools such as Airflow, dbt, or equivalent.
* Strong understanding of cloud-based data platforms (e.g., Snowflake, BigQuery, Redshift, or Databricks).
* Deep familiarity with capitalization standards, the CapEx/OpEx distinction, and operational reporting in a tech-driven environment.
* Demonstrated ability to build scalable, reliable ETL/ELT workflows that serve diverse analytical and reporting needs.
* Experience working cross-functionally in complex organizations with multiple stakeholder groups.
* Passion for operational excellence, data governance, and driving actionable business insights from data.
Preferred qualifications:
* Experience supporting BizOps, FP&A, or Product Finance teams with data tooling and reporting.
* Familiarity with BI platforms like Looker, Power BI, or Tableau.
* Exposure to agile delivery frameworks and enterprise-level operational rhythms.

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

India - Haryana - Gurgaon
Technology
Full-Time Regular
06/10/2025
ID # R-95367
Posted 1 week ago
5.0 - 8.0 years
0 - 0 Lacs
Gurgaon
On-site
Job Title: Python Developer
Experience: 5-8 Years
Location: Gurgaon (hybrid)
Notice Period: Immediate to 15 days
Mandatory skills: Python, Java and JavaScript; AWS experience (AWS Python libraries for services like EC2, S3, Lambda, DynamoDB, SQS, SNS); SQL

Key skills and qualifications we are looking for:
* Over 4 years of software development experience, with expertise in Python and familiarity with other programming languages such as Java and JavaScript.
* A minimum of 2 years of significant hands-on experience with AWS services, including Lambda and Step Functions, with experience on the Zuora platform (Billing and Revenue) being highly desirable.
* At least 2 years of working knowledge in SQL.
* Solid experience working with AWS cloud services, especially S3, Glue, Lambda, Redshift, and Athena.
* Experience with continuous integration/delivery (CI/CD) tools like Jenkins and Terraform.
* Excellent communication skills are essential.

Responsibilities:
* Design and implement backend services and APIs using Python (a minimal serverless sketch follows this posting).
* Build and maintain CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, or Jenkins.
* Optimize performance, scalability, and security of cloud applications.
* Implement logging, monitoring, and alerting for production workloads.
* Design, develop, and maintain scalable backend services using Python and Java.
* Develop responsive and user-friendly frontend interfaces using JavaScript.
* Collaborate with cross-functional teams to define, design, and deliver new features.
* Write clean, maintainable, and well-tested code.
* Troubleshoot, debug, and optimize application performance.
* Participate in code reviews and follow best development practices.

Job Types: Full-time, Permanent
Pay: ₹60,000.00 - ₹65,000.00 per month
Location Type: In-person
Schedule: Day shift
Work Location: In person
Application Deadline: 14/06/2025
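A minimal sketch of the kind of serverless Python backend work described above: an AWS Lambda handler that reads SQS messages and writes them to DynamoDB with boto3. The table name, queue payload shape, and field names are hypothetical.

```python
# Minimal AWS Lambda handler sketch: consume a batch of SQS records and persist
# them to DynamoDB. Table and payload fields are hypothetical.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-invoices")   # hypothetical table


def lambda_handler(event, context):
    records = event.get("Records", [])
    # SQS-triggered Lambdas receive a batch of records per invocation
    for record in records:
        payload = json.loads(record["body"])
        table.put_item(
            Item={
                "invoice_id": payload["invoice_id"],
                "amount": str(payload["amount"]),   # DynamoDB prefers non-float numerics
                "status": payload.get("status", "PENDING"),
            }
        )
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```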
Posted 1 week ago
0 years
4 - 7 Lacs
Gurgaon
On-site
A Software Engineer is curious and self-driven to build and maintain multi-terabyte operational marketing databases and integrate them with cloud technologies. Our databases typically house millions of individuals and billions of transactions and interact with various web services and cloud-based platforms. Once hired, the qualified candidate will be immersed in the development and maintenance of multiple database solutions to meet global client business objectives.

Job Description:
Key responsibilities:
* Has 2-4 years of experience.
* Will work under close supervision of Tech Leads / Lead Developers.
* Should be able to understand detailed design with minimal explanation.
* Individual contributor; able to perform mid- to complex-level tasks with minimal supervision. Senior team members will peer-review assigned tasks.
* Build and configure our Marketing Database / Data environment platform by integrating feeds as per detailed design / transformation logic.
* Good knowledge of Unix scripting and/or Python.
* Must have strong knowledge of SQL.
* Good understanding of ETL tools (Talend, Informatica, DataStage, Ab Initio, etc.) as well as database skills (Oracle, SQL Server, Teradata, Vertica, Redshift, Snowflake, BigQuery, Azure DW, etc.).
* Fair understanding of relational databases, stored procedures, etc.
* Experience in cloud computing (one or more of AWS, Azure, GCP) is a plus.
* Less supervision and guidance from senior resources will be required.

Location: DGS India - Gurugram - Golf View Corporate Towers
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Delhi
On-site
Delhi / Bangalore | Engineering / Full Time / Hybrid

What is Findem:
Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time - making an individual’s entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem’s automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5 - 9 years

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying and managing various data pipelines, a data lake, and Big Data processing solutions using Big Data and ETL technologies.

Location: Delhi, India (hybrid; 3 days onsite)

Responsibilities
* Build data pipelines, Big Data processing solutions and data lake infrastructure using various Big Data and ETL technologies.
* Assemble and process large, complex data sets that meet functional and non-functional business requirements.
* ETL from a wide variety of sources like MongoDB, S3, server-to-server, Kafka etc., and processing using SQL and big data technologies (a minimal streaming sketch follows this posting).
* Build analytical tools to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
* Build interactive and ad-hoc query self-serve tools for analytics use cases.
* Build data models and data schemas for performance, scalability and functional requirements.
* Build processes supporting data transformation, metadata, dependency and workflow management.
* Research, experiment with and prototype new tools/technologies and make them successful.

Skill Requirements
* Must have: strong in Python/Scala.
* Must have experience in Big Data technologies like Spark, Hadoop, Athena / Presto, Redshift, Kafka etc.
* Experience with various file formats like Parquet, JSON, Avro, ORC etc.
* Experience with workflow management tools like Airflow.
* Experience with batch processing, streaming and message queues.
* Any of the visualization tools like Redash, Tableau, Kibana etc.
* Experience in working with structured and unstructured data sets.
* Strong problem-solving skills.

Good to have
* Exposure to NoSQL databases like MongoDB.
* Exposure to cloud platforms like AWS, GCP, etc.
* Exposure to microservices architecture.
* Exposure to machine learning techniques.

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru.

Equal Opportunity
As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.
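A minimal sketch of the Kafka-to-data-lake streaming pattern mentioned in the responsibilities, using Spark Structured Streaming. The broker address, topic, schema, and S3 paths are hypothetical, and the Kafka source assumes the spark-sql-kafka connector package is on the classpath.

```python
# Minimal sketch: consume JSON events from Kafka with Spark Structured Streaming
# and land them as Parquet on S3. Requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")   # hypothetical broker
    .option("subscribe", "product-events")                # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-lake/events/")
    .option("checkpointLocation", "s3a://example-lake/_checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```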
Posted 1 week ago
5.0 years
4 - 6 Lacs
Noida
On-site
Basic qualifications
* 5+ years of data engineering experience.
* Extensive experience writing SQL queries and stored procedures.
* Experience with big data tools and distributed computing.
* Finance experience, exhibiting knowledge of financial reporting, budgeting and forecasting functions and processes.
* Bachelor's degree.

Are you a highly skilled data engineer and project leader? Do you think big, enjoy complexity and building solutions that scale? Are you curious to know what you could achieve in a company that pushes the boundaries of modern technology? If you answered yes and you have a background in FinTech, you’ll love this role and Amazon’s data-obsessed culture.

Amazon Devices and Services Fintech is the global team that designs and builds the financial planning and analysis tools for a wide variety of Amazon’s new and established organizations. From Kindle to Ring, and even new and exciting companies like Kuiper (our new interstellar satellite play), this team enjoys a wide variety of complex and interesting problem spaces. They are almost like FinTech consultants embedded in Amazon. This team is looking for a Data Engineer to build and enhance the business's finance systems with TM1 at its core. You will manage all aspects from requirements gathering, technical design, development, deployment, and integration to solve budgeting, planning, performance management and reporting challenges.

Key job responsibilities
* Design and implement next-generation financial solutions assisted by almost unlimited access to AWS resources including EC2, RDS, Redshift, Step Functions, EMR, Lambda and the third-party software TM1.
* Build and deliver high-quality data pipelines capable of scaling from running for a single month of data during month-end close to 150 and more months when doing restatements.
* Continually improve ongoing reporting and analysis processes and infrastructure, automating or simplifying self-service capabilities for customers.
* Dive deep to resolve problems at their root, looking for failure patterns and suggesting and implementing fixes or enhancements.
* Prepare runbooks, methods of procedure, tutorials, and training videos on best practices for global delivery.
* Solve unique challenges presented by the massive data volume and diverse data sets, working for one of the largest companies in the world.

Preferred qualifications
* Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).
* Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions.
* Experience with programming languages such as Python, Java and shell scripts.
* Experience with IBM Planning Analytics/TM1, both scripting processes and writing rules.
* Experience with design and delivery of formal training curricula and programs.
* Project management, scoping, reporting, and scheduling experience.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 week ago
7.0 years
0 Lacs
Andhra Pradesh
On-site
Key Responsibilities
* Design and develop high-volume data engineering solutions for mission-critical systems with quality.
* Make enhancements to various applications that meet business and auditing requirements.
* Research and evaluate alternative solutions and make recommendations on improving the product to meet business and information risk requirements.
* Evaluate service-level issues and suggested enhancements to diagnose and address underlying system problems and inefficiencies.
* Participate in full development lifecycle activities for the product (coding, testing, release activities).
* Support release activities on weekends as required, and support any application issues reported during weekends.
* Coordinate day-to-day activities for multiple projects with onshore and offshore team members.
* Ensure the availability of the platform in lower environments.

Required Qualifications
* 7+ years of overall IT experience, which includes hands-on experience in Big Data technologies.
* Mandatory: hands-on experience in Python and PySpark; building PySpark applications using Spark DataFrames in Python with Jupyter notebook and PyCharm (IDE); experience optimizing Spark jobs that process huge volumes of data (a short format-conversion sketch follows this posting).
* Hands-on experience in version control tools like Git.
* Worked on Amazon's analytics services like Amazon EMR, Amazon Athena, AWS Glue.
* Worked on Amazon's compute services like AWS Lambda and Amazon EC2, Amazon's storage service S3, and a few other services like SNS.
* Experience/knowledge of bash/shell scripting is a plus.
* Has built ETL processes to take data, copy it, structurally transform it, etc., involving a wide variety of formats like CSV, TSV, XML and JSON.
* Experience in working with fixed-width, delimited, and multi-record file formats.
* Good to have knowledge of data warehousing concepts: dimensions, facts, schemas (snowflake, star, etc.).
* Has worked with columnar storage formats: Parquet, Avro, ORC etc.
* Well versed with compression techniques: Snappy, Gzip.
* Good to have knowledge of AWS databases (at least one): Aurora, RDS, Redshift, ElastiCache, DynamoDB.
* Hands-on experience in tools like Jenkins to build, test and deploy applications.
* Awareness of DevOps concepts and the ability to work in an automated release pipeline environment.
* Excellent debugging skills.

Preferred Qualifications
* Experience working with US clients and business partners.
* Knowledge of front-end frameworks.
* Exposure to the BFSI domain is good to have.
* Hands-on experience with any API gateway and management platform.

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer.
We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: (2:00p-10:30p) India
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology
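A short PySpark sketch of the file-format and compression work listed in the qualifications above: reading a gzip-compressed, pipe-delimited extract and rewriting it as Snappy-compressed Parquet. The paths, delimiter, and layout are hypothetical.

```python
# Minimal PySpark sketch: convert a delimited, gzip-compressed extract into
# Snappy-compressed Parquet. Paths and delimiter are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

extract = (
    spark.read
    .option("header", "true")
    .option("sep", "|")
    .csv("s3://example-landing/positions/2025-06-01/*.csv.gz")  # Spark reads gzip transparently
)

(
    extract.write
    .mode("overwrite")
    .option("compression", "snappy")
    .parquet("s3://example-curated/positions/run_date=2025-06-01/")
)
```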
Posted 1 week ago
3.0 years
0 Lacs
Kochi, Kerala, India
Remote
AWS Data Engineer
Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time

About the Role:
We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
* Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
* Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
* Optimize pipeline performance and ensure reliability and fault tolerance
* Collaborate with cross-functional teams including data scientists and analysts
* Perform data transformations using Python, Pandas, and SQL
* Maintain data integrity, quality, and security across the platform
* Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
* Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
* Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions

Required Skills and Qualifications:
* Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
* 3+ years of experience in data engineering using AWS
* Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
* Solid understanding of data modeling, warehousing, and pipeline orchestration
* Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:
* Experience working with energy sector data or IoT/sensor-based data
* Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)
* Familiarity with big data technologies like Apache Spark, Kafka
* Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
* Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew
* AWS Certifications (Data Analytics, Solutions Architect)
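As an illustration of the AWS Glue pipeline work this role describes, here is a minimal Glue ETL job script. It only runs inside a Glue job (the awsglue library is provided by the service), and the catalog database, table, field names, and output path are hypothetical.

```python
# Minimal AWS Glue ETL job sketch: read a cataloged sensor table, retype the
# fields downstream analytics need, and write Parquet to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read sensor readings registered in the Glue Data Catalog (hypothetical names)
readings = glue_context.create_dynamic_frame.from_catalog(
    database="energy_raw", table_name="sensor_readings"
)

# Keep and retype only the fields needed downstream
mapped = ApplyMapping.apply(
    frame=readings,
    mappings=[
        ("device_id", "string", "device_id", "string"),
        ("reading_ts", "string", "reading_ts", "timestamp"),
        ("kwh", "double", "kwh", "double"),
    ],
)

# Write analytics-ready Parquet back to S3
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-energy-curated/sensor_readings/"},
    format="parquet",
)
job.commit()
```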
Posted 1 week ago
7.0 - 12.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Labcorp is hiring a Senior Data Engineer. This person will be an integrated member of the Labcorp Data and Analytics team and work within the IT team, playing a crucial role in designing, developing and maintaining data solutions using Databricks, Fabric, Spark, PySpark and Python. They will be responsible for reviewing business requests and translating them into technical solutions and technical specifications. In addition, they will work with team members to mentor fellow developers and grow their knowledge and expertise. The role operates in a fast-paced and high-volume processing environment, where quality and attention to detail are vital.

RESPONSIBILITIES:
* Design and implement end-to-end data engineering solutions by leveraging the full suite of Databricks and Fabric tools, including data ingestion, transformation, and modeling.
* Design, develop and maintain end-to-end data pipelines using Spark, ensuring scalability, reliability, and cost-optimized solutions.
* Conduct performance tuning and troubleshooting to identify and resolve any issues.
* Implement data governance and security best practices, including role-based access control, encryption, and auditing.
* Work effectively in a fast-paced, agile development environment.

REQUIREMENTS:
* 8+ years of experience in designing and implementing data solutions, with at least 4+ years of experience in data engineering.
* Extensive experience with Databricks and Fabric, including a deep understanding of their architecture, data modeling, and real-time analytics.
* Minimum 6+ years of experience in Spark, PySpark and Python.
* Strong experience in SQL, Spark SQL, data modeling and RDBMS concepts.
* Strong knowledge of Data Fabric services, particularly Data Engineering, Data Warehouse, Data Factory, and Real-Time Intelligence.
* Strong problem-solving skills, with the ability to multitask.
* Familiarity with security best practices in cloud environments, Active Directory, encryption, and data privacy compliance.
* Effective oral and written communication.
* Experience in AGILE development, SCRUM and Application Lifecycle Management (ALM).
* Preference given to current or former Labcorp employees.

EDUCATION: Bachelor's in Engineering, or MCA.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Integration Development: Design and implement integration solutions using the MuleSoft Anypoint Platform for various enterprise applications, including ERP, CRM, and third-party systems.
API Management: Develop and manage APIs using MuleSoft's API Gateway, ensuring best practices for API design, security, and monitoring.
MuleSoft Anypoint Studio: Develop, deploy, and monitor MuleSoft applications using Anypoint Studio and the Anypoint Management Console.
Data Transformation: Use MuleSoft's DataWeave to transform data between various formats (XML, JSON, CSV, etc.) as part of integration solutions.
Troubleshooting and Debugging: Provide support in troubleshooting and resolving integration issues and ensure the solutions are robust and scalable.
Collaboration: Work closely with other developers, business analysts, and stakeholders to gather requirements and design and implement integration solutions.
Documentation: Create and maintain technical documentation for the integration solutions, including API specifications, integration architecture, and deployment processes.
Best Practices: Ensure that the integrations follow industry best practices and MuleSoft's guidelines for designing and implementing scalable and secure solutions.

Required Qualifications
* Bachelor's degree in Computer Science, Information Technology, or a related field.
* 3+ years of experience in MuleSoft development and integration projects.
* Proficiency in the MuleSoft Anypoint Platform, including Anypoint Studio, Anypoint Exchange, and the Anypoint Management Console.
* Strong knowledge of API design and management, including REST, SOAP, and Web Services.
* Proficiency in DataWeave for data transformation.
* Hands-on experience with integration patterns and technologies such as JMS, HTTP/HTTPS, File, Database, and Cloud integrations.
* Experience with CI/CD pipelines and deployment tools such as Jenkins, Git, and Maven.
* Good understanding of cloud platforms (AWS, Azure, or GCP) and how MuleSoft integrates with cloud services.
* Excellent troubleshooting and problem-solving skills.
* Strong communication skills and the ability to work effectively in a team environment.
* Strong working knowledge of modern programming languages, ETL / data integration tools (preferably SnapLogic) and understanding of cloud concepts: SSL/TLS, SQL, REST, JDBC, JavaScript, JSON.
* Strong hands-on experience in SnapLogic design/development, with good working experience using various Snaps for JDBC, SAP, Files, REST, SOAP, etc.
* Should be able to deliver projects by leading a team of 6-8 members.
* Should have experience in integration projects with heterogeneous landscapes.
* Good to have the ability to build complex mappings with JSON path expressions, flat files, and Python scripting.
* Good to have experience in Groundplex and Cloudplex integrations.
* Experience in one or more RDBMS (Oracle, DB2, SQL Server, PostgreSQL and Redshift).
* Real-time experience working in OLAP and OLTP database models (dimensional models).
Posted 1 week ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are looking for a skilled and motivated Data Analyst / Data Engineer to join our growing data team in Jaipur. The ideal candidate should have hands-on experience with SQL, Python, and Power BI; familiarity with Snowflake is a strong advantage. You will play a key role in building data pipelines, delivering analytical insights, and enabling data-driven decision-making across the organization.

Role Description
* Develop and manage robust data pipelines and workflows for data integration, transformation, and loading.
* Design, build, and maintain interactive Power BI dashboards and reports based on business needs.
* Optimize existing Power BI reports for performance, usability, and scalability.
* Write and optimize complex SQL queries for data analysis and reporting.
* Use Python for data manipulation, automation, and advanced analytics where applicable.
* Collaborate with business stakeholders to understand requirements and deliver actionable insights.
* Ensure high data quality, integrity, and governance across all reporting and analytics layers.
* Work closely with data engineers, analysts, and business teams to deliver scalable data solutions.
* Leverage cloud data platforms like Snowflake for data warehousing and analytics (good to have).

Qualifications
* 3-6 years of professional experience in data analysis or data engineering.
* Bachelor’s degree in Computer Science, Engineering, Data Science, Information Technology, or a related field.
* Strong proficiency in SQL with the ability to write complex queries and perform data modeling.
* Hands-on experience with Power BI for data visualization and business intelligence reporting.
* Programming knowledge in Python for data processing and analysis.
* Good understanding of ETL/ELT, data warehousing concepts, and cloud-based data ecosystems.
* Excellent problem-solving skills, attention to detail, and analytical thinking.
* Strong communication and interpersonal skills to work effectively with cross-functional teams.

Preferred / Good To Have
* Experience working with large datasets and cloud platforms like Snowflake, Redshift, or BigQuery.
* Familiarity with workflow orchestration tools (e.g., Airflow) and version control systems (e.g., Git).
* Power BI Certification (e.g., PL-300: Microsoft Power BI Data Analyst).
* Exposure to Agile methodologies and end-to-end BI project life cycles.
Benefits
* Health Insurance
* Paid leave
* Technical training and certifications
* Robust learning and development opportunities
* Incentive
* Toastmasters
* Food Program
* Fitness Program
* Referral Bonus Program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today👇 and join a team that’s shaping the future!!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated in the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
What you need to succeed in this job?
MS or BS/B.Tech in computer science or equivalent experience from a top college.
Experience with containers and their orchestration using Docker.
Hands-on AWS Technical Architect / Technical Lead (Associate) with 3+ years of experience developing and assisting in architecting enterprise-level, large-scale, multi-tier solutions that require complex architectural decisions.
Hands-on experience implementing cloud solutions using various AWS services, including EC2, EMR, VPC, S3, Glacier, Lambda, Directory Services, CloudFormation, OpsWorks, CodePipeline, CodeBuild, CodeDeploy, RDS, Data Pipeline, DynamoDB, Redshift, etc.
Hands-on experience architecting and securing infrastructure on AWS using IAM, KMS, Cognito, API Gateway, CloudTrail, CloudWatch, Config, Trusted Advisor, Security Groups, NACLs, etc.
Strong experience with major AWS services such as CloudFront, CloudWatch, CloudTrail, VPC, RDS, DynamoDB, SQS, SNS, and Athena.
Good knowledge of application migrations and data migrations from on-premise to AWS Cloud.
Experience with infrastructure using Docker containerization and Kubernetes.
Strong experience in Terraform, CloudFormation, and Python for infrastructure automation.
Experience with Amazon S3 for storage, SNS, CloudFront for content access and delivery (CDN), and VPC for network security access as per requirements.
Experience in encryption of data in motion and data at rest.
Experience with Amazon GuardDuty and AWS Secrets Manager.
Ability to write technical documentation (platform architecture, strategy, engineering setup).
Familiar with modern technologies/frameworks and software development/delivery methodologies: REST, OAuth, Spring, Kafka, NoSQL, Redis, PostgreSQL, Elasticsearch (ELK).
It will be an added advantage: Azure cloud experience; migration of data from cloud to on-premise or between different cloud providers.
Soft Skills: Strong communication and interpersonal skills. Ability to work in a fast-paced, high-energy environment.
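A small boto3 sketch of one item the listing calls out — encryption of data at rest and baseline access controls on S3. The bucket name, region, and KMS key alias are assumptions for illustration, not details from the role.

```python
# Sketch: create an S3 bucket with default encryption at rest (SSE-KMS)
# and block public access. Names, region, and key alias are placeholders.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

bucket = "example-architecture-bucket"   # placeholder

s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Enforce default encryption at rest with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-key",   # placeholder
                }
            }
        ]
    },
)

# Block all public access as a baseline security control.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

Encryption in motion is typically handled separately, for example by requiring TLS in the bucket policy.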
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
P2-C2-STS
You are passionate about driving an SRE / DevSecOps mindset and culture in a fast-paced, challenging environment, where you get the opportunity to work with a spectrum of the latest tools and technologies to drive forward automation, observability, and CI/CD.
You are actively looking to improve implemented solutions, understand the efficacy of collaboration, and work with cross-functional teams to build and improve the CI/CD pipeline and increase automation (reduce toil). As a member of this team, you possess the ability to inspire and leverage your experience to inject new knowledge and skills into an already high-performing team.
Help identify areas of improvement, especially when it comes to observability, proactiveness, automation, and toil management.
Take a strategic approach with clear objectives to improve system availability, optimize performance, and improve incident MTTR.
Build and maintain reliable engineering systems using SRE and DevSecOps models, with a special focus on event management (monitoring/alerts), self-healing, and reliability testing.
Work in collaboration with application development, quality, product, and data engineering teams to champion SRE/DevOps culture and practices.
Take a strategic approach with clear objectives to improve service/product availability, optimize performance, improve incident MTTR and change success rate, and ensure a feedback loop to development teams.
Build and maintain reliable systems and platforms using SRE and DevSecOps principles, with a special focus on observability, resiliency (proactive impact prevention), self-healing, and reliability testing.
Work with application and business teams to establish SLOs/SLIs and SRE dashboards that provide multiple views (line of business, business process, or application) to track value and enable effective decision-making.
Take an innovative approach to reliability, from the architecture and feasibility phases through operations, with continuous improvement following the product model and Agile methodologies.
Skills
Databases (Redshift, RDS, Aurora)
In-depth Dynatrace experience
Coding (JSON, CDK)
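A minimal boto3 sketch of the event-management side of this role: a CloudWatch alarm that notifies an on-call SNS topic when an application starts returning 5XX errors, a simple proxy for an availability SLO breach. The alarm name, load balancer dimension, thresholds, and topic ARN are all placeholder assumptions; the listing's Dynatrace and CDK tooling is not shown here.

```python
# Sketch: an availability-style CloudWatch alarm wired to an SNS topic.
# All names, ARNs, and thresholds below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="orders-api-availability-slo",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer",
                 "Value": "app/orders-alb/0123456789abcdef"}],   # placeholder
    Statistic="Sum",
    Period=60,                    # evaluate once per minute
    EvaluationPeriods=5,          # five consecutive breaching minutes
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:sre-oncall"],  # placeholder
)
```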
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.
What you'll be doing...
Design, build, and maintain robust, scalable data pipelines and ETL processes.
Ensure high data quality, accuracy, and integrity across all systems.
Work with structured and unstructured data from multiple sources.
Optimize data workflows for performance, reliability, and cost efficiency.
Collaborate with analysts and data scientists to meet data needs.
Monitor, troubleshoot, and improve existing data systems and jobs.
Apply best practices in data governance, security, and compliance.
Use tools like Spark, Kafka, Airflow, SQL, Python, and cloud platforms.
Stay updated with emerging technologies and continuously improve data infrastructure.
What we're looking for…
You will need to have:
Bachelor's degree or four or more years of work experience.
Expertise in the AWS Data Stack – Strong hands-on experience with S3, Glue, EMR, Lambda, Kinesis, Redshift, Athena, and IAM security best practices.
Big Data & Distributed Computing – Deep understanding of Apache Spark (batch and streaming) for large-scale data processing and analytics.
Real-Time & Batch Data Processing – Proven experience designing, implementing, and optimizing event-driven and streaming data pipelines using Kafka and Kinesis.
ETL/ELT & Data Modeling – Strong experience in architecting and optimizing scalable ETL/ELT pipelines for structured and unstructured data.
Programming Skills – Proficiency in Scala and Java for data processing and automation.
Database & SQL Optimization – Strong understanding of SQL and experience with relational databases (PostgreSQL, MySQL). Expertise in SQL query tuning, data warehousing, and working with Parquet, Avro, and ORC formats.
Infrastructure as Code (IaC) & DevOps – Experience with CloudFormation, CDK, and CI/CD pipelines for automated deployments in AWS.
Monitoring, Logging & Observability – Familiarity with AWS CloudWatch, Prometheus, or similar monitoring tools.
API Integration – Ability to fetch and process data from external APIs and databases.
Architecture & Scalability Mindset – Ability to design and optimize data architectures for high-volume, high-velocity, and high-variety datasets.
Performance Optimization – Experience in optimizing data pipelines for cost and performance.
Cross-Team Collaboration – Work closely with Data Scientists, Analysts, DevOps, and Business Teams to deliver end-to-end data solutions.
Even better if you have one or more of the following:
Agile & CI/CD Practices – Comfortable working in Agile/Scrum environments, driving continuous integration and continuous deployment.
#TPDRNONCDIO
Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours
40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
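A minimal PySpark sketch of the streaming side described above — consuming JSON events from Kafka and writing a cleaned, curated layer to S3 as Parquet. The topic, schema, broker address, and S3 paths are placeholder assumptions, not details from the listing.

```python
# Sketch: Spark Structured Streaming from Kafka to S3 Parquet.
# Requires the spark-sql-kafka connector package on the classpath.
# Topic, schema, broker, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")    # placeholder
    .option("subscribe", "orders-events")                # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Parse the Kafka value payload as JSON and drop malformed records.
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .filter(col("amount").isNotNull())
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/curated/orders/")            # placeholder
    .option("checkpointLocation", "s3://example-bucket/chk/orders/")  # placeholder
    .outputMode("append")
    .start()
)

query.awaitTermination()
```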
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
P1-C3-STS
Seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources.
Can optimize data models for performance and efficiency.
Able to write SQL queries to support data analysis and reporting.
Design, implement, and maintain the data architecture for all AWS data services.
Work with stakeholders to identify business needs and requirements for data-related projects.
Design and implement ETL processes to load data into the data warehouse.
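A minimal boto3 sketch of the Athena piece of this listing — submitting a SQL query and polling for the result. The database, table, and S3 output location are placeholder assumptions.

```python
# Sketch: run an Athena query and wait for it to finish.
# Database, table, and output location are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS cnt FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},               # placeholder
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:   # the first row is the header
        print([field.get("VarCharValue") for field in row["Data"]])
```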
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: AWS Data Engineer
Job Summary
We are looking for a skilled Data Engineer to join our team and help build, deploy, and maintain data pipelines and data systems using AWS technologies. In this role, you will be responsible for designing and implementing data integration processes, ensuring data quality, and providing technical expertise in data modeling, ETL, and business intelligence. You will work collaboratively with cross-functional teams to support a range of data-driven projects and initiatives.
Responsibilities
Data Pipeline Development: Design, develop, test, and deploy data integration processes using AWS services (e.g., Redshift, RDS, Glue, S3, Lambda, Kinesis, Step Functions) and other tools (see the sketch after this listing).
Data Integration: Build and manage data pipelines for batch and real-time data processing. Develop and maintain data models to support business needs and ensure data accuracy.
Documentation: Create and maintain technical documentation, including ETL architecture, data integration specifications, and data testing plans.
Collaboration: Work with business users and stakeholders to understand data requirements, create data flows, and develop conceptual, logical, and physical data models.
Technology Utilization: Leverage AWS technologies and best practices to optimize data processing and integration. Stay current with emerging technologies and recommend innovative solutions.
Data Quality: Ensure the accuracy, reliability, and performance of data systems. Implement testing strategies and automation to maintain data integrity.
Support & Maintenance: Provide support for data systems and pipelines, troubleshooting and resolving issues as they arise. Perform regular maintenance and updates to ensure optimal performance.
Reporting & Analytics: Collaborate with the reporting team to design and implement data solutions that support business intelligence and analytics.
Required Qualifications
Experience: 4+ years of hands-on experience as a Data Engineer with a strong focus on AWS technologies (e.g., EMR, Redshift, RDS, Glue, S3, Lambda, Athena, Kinesis, and CloudWatch).
Technical Skills: Proficient in Python and SQL script creation and data integration tools. Experience with data modeling and ETL processes.
Programming Skills: Experience in programming languages such as Python, PySpark, or Scala.
Database Platforms: Experience with major database platforms (e.g., SQL Server, Oracle, Snowflake, Redshift).
Orchestration & Automation: Familiarity with orchestration tools (e.g., AWS Data Pipeline, Step Functions), infrastructure automation (e.g., Terraform, CloudFormation), and CI/CD pipelines (e.g., Jenkins, GitLab CI/CD).
Build & Test Tools: Working knowledge of build tools (e.g., Maven, Gradle) and testing frameworks (e.g., JUnit, pytest).
Documentation: Ability to create comprehensive technical documentation and maintain clear records of data processes and systems.
Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
Preferred Skills & Experience
Big Data Frameworks: Experience with big data frameworks (e.g., Spark, Hadoop) and related technologies (e.g., PySpark, Spark SQL).
Data Integration Processes: Knowledge of data warehousing, data integration, and ETL processes.
Communication: Strong communication skills, with the ability to effectively collaborate with team members and stakeholders.
Problem-Solving: Demonstrated ability to troubleshoot complex data issues and implement effective solutions.
Agile Environment: Experience working in an agile development environment with tools like Azure DevOps or JIRA.
If you are a motivated data professional with experience in AWS and a passion for solving complex data challenges, we encourage you to apply for this exciting opportunity.
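As referenced above, a minimal AWS Glue (PySpark) job sketch for the kind of catalog-to-S3 pipeline the responsibilities describe. The database, table, and target path are placeholder assumptions, not details from the listing.

```python
# Sketch: a Glue job that reads a Data Catalog table, drops rows with a
# null key, and writes Parquet to a curated S3 prefix. Names are placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a table registered in the Glue Data Catalog (placeholder names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",
    table_name="orders",
)

# Basic cleanup with Spark before writing the curated layer.
orders_df = orders.toDF().dropna(subset=["order_id"])
curated = DynamicFrame.fromDF(orders_df, glue_context, "curated_orders")

glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},  # placeholder
    format="parquet",
)

job.commit()
```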
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are looking for a strong SQL and reporting analyst with expertise in Extract, Transform, Load (ETL) processes, SQL querying, and reporting, and hands-on experience in an AWS cloud environment.
Key Responsibilities
Design, develop, and maintain scalable ETL workflows to extract data from various sources, transform it based on business requirements, and load it into data warehouses or databases.
SQL Querying: Write complex SQL queries to extract, manipulate, and analyze data for reporting and ad-hoc business requests.
Reporting: Create, optimize, and automate reports and dashboards using Power BI or Amazon QuickSight to deliver actionable insights to stakeholders.
AWS Environment: Understand how to leverage AWS services (e.g., S3, Redshift, Glue, Lambda) to store, process, and manage data efficiently in a cloud-based ecosystem.
Data Integrity: Ensure data accuracy, consistency, and quality throughout the ETL and reporting processes by implementing validation checks and troubleshooting issues.
Collaboration: Work closely with data engineers, business analysts, and stakeholders to understand data needs and deliver tailored solutions.
Optimization: Monitor and optimize ETL processes and queries for performance and scalability.
Documentation: Maintain clear documentation of ETL processes, data models, and reporting logic for future reference and team knowledge sharing.
Soft Skills
Communication: Ability to communicate effectively with non-technical stakeholders to understand requirements.
Team Collaboration: Experience working in teams using Agile or Scrum methodologies.
Technical Skills
Experience: 3+ years of experience in ETL development, data analysis, or a similar role.
SQL Skills: Advanced proficiency in writing and optimizing complex SQL queries (e.g., joins, subqueries, window functions).
AWS Expertise: Hands-on experience with AWS services such as S3, Redshift, Glue, Lambda, or Athena for data storage and processing.
Reporting Tools: Proficiency in building visualizations and dashboards using Power BI.
Programming: Familiarity with scripting languages like Python or Bash for automation and data manipulation is a plus.
Data Concepts: Strong understanding of data warehousing, data modeling, and ETL best practices.
Problem-Solving: Ability to troubleshoot data issues and optimize processes efficiently.
Communication: Excellent verbal and written communication skills to collaborate with technical and non-technical stakeholders.
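A minimal sketch of the kind of window-function query this listing mentions, executed from Python with pandas and SQLAlchemy. The connection string, table, and column names are placeholders standing in for a Redshift- or Postgres-style endpoint.

```python
# Sketch: an analytic (window-function) query run from Python.
# The connection string and object names are placeholders.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine(
    "postgresql+psycopg2://user:***@example-host:5439/analytics"  # placeholder
)

query = text("""
    SELECT
        region,
        order_month,
        revenue,
        SUM(revenue) OVER (PARTITION BY region ORDER BY order_month
                           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
            AS running_revenue,
        RANK() OVER (PARTITION BY order_month ORDER BY revenue DESC)
            AS month_rank
    FROM monthly_sales
""")

with engine.connect() as conn:
    df = pd.read_sql(query, conn)

print(df.head())
```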
Posted 1 week ago
8.0 years
0 Lacs
Greater Hyderabad Area
On-site
Job Title: Data Engineering Lead
Job Type: Full-time
Location: Hyderabad
Expected Joining Time: Immediate to 30 days
Job Description
We are looking for an accomplished and dynamic Data Engineering Lead to join our team and drive the design, development, and delivery of cutting-edge data solutions. This role requires a balance of strong technical expertise, strategic leadership, and a consulting mindset. As the Lead Data Engineer, you will oversee the design and development of robust data pipelines and systems, manage and mentor a team of 5 to 7 engineers, and play a critical role in architecting innovative solutions tailored to client needs. You will lead by example, fostering a culture of accountability, ownership, and continuous improvement while delivering impactful, scalable data solutions in a fast-paced, consulting environment.
Key Responsibilities
Client Collaboration: Act as the primary point of contact for US-based clients, ensuring alignment on project goals, timelines, and deliverables. Engage with stakeholders to understand requirements and ensure alignment throughout the project lifecycle. Present technical concepts and designs to both technical and non-technical audiences. Communicate effectively with stakeholders to ensure alignment on project goals, timelines, and deliverables. Set realistic expectations with clients and proactively address concerns or risks.
Data Solution Design and Development: Architect, design, and implement end-to-end data pipelines and systems that handle large-scale, complex datasets. Ensure optimal system architecture for performance, scalability, and reliability. Evaluate and integrate new technologies to enhance existing solutions. Implement best practices in ETL/ELT processes, data integration, and data warehousing.
Project Leadership and Delivery: Lead technical project execution, ensuring timelines and deliverables are met with high quality. Collaborate with cross-functional teams to align business goals with technical solutions. Act as the primary point of contact for clients, translating business requirements into actionable technical strategies.
Team Leadership and Development: Manage, mentor, and grow a team of 5 to 7 data engineers. Ensure timely follow-ups on action items and maintain seamless communication across time zones. Conduct code reviews and validations, and provide feedback to ensure adherence to technical standards. Provide technical guidance and foster an environment of continuous learning, innovation, and collaboration. Support collaboration and alignment between the client and delivery teams.
Optimization and Performance Tuning: Be hands-on in developing, testing, and documenting data pipelines and solutions as needed. Analyze and optimize existing data workflows for performance and cost-efficiency. Troubleshoot and resolve complex technical issues within data systems.
Adaptability and Innovation: Embrace a consulting mindset with the ability to quickly learn and adopt new tools, technologies, and frameworks. Identify opportunities for innovation and implement cutting-edge technologies in data engineering. Exhibit a "figure it out" attitude, taking ownership and accountability for challenges and solutions.
Learning and Adaptability: Stay updated with emerging data technologies, frameworks, and tools. Actively explore and integrate new technologies to improve existing workflows and solutions.
Internal Initiatives and Eminence Building: Drive internal initiatives to improve processes, frameworks, and methodologies.
Contribute to the organization's eminence by developing thought leadership, sharing best practices, and participating in knowledge-sharing activities.
Qualifications
Education: Bachelor's or master's degree in Computer Science, Data Engineering, or a related field. Cloud platform certifications, such as Snowflake SnowPro or a Data Engineer certification, are a plus.
Experience: 8+ years of experience in data engineering with hands-on expertise in data pipeline development, architecture, and system optimization. Demonstrated success in managing global teams, especially across US and India time zones. Proven track record in leading data engineering teams and managing end-to-end project delivery. Strong background in data warehousing and familiarity with tools such as Matillion, dbt, Striim, etc.
Technical Skills:
Lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs.
Expertise in programming languages such as Python, Scala, or Java.
Proficiency in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc.
Solid understanding of database systems (relational and NoSQL) and data modeling techniques.
Hands-on experience of 2+ years in designing and developing data integration solutions using Matillion and/or dbt.
Strong knowledge of data engineering and integration frameworks.
Expertise in architecting data solutions.
Successfully implemented at least two end-to-end projects with multiple transformation layers.
Good grasp of coding standards, with the ability to define standards and testing strategies for projects.
Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services.
Enthusiastic about working in Agile methodology.
Comprehensive understanding of the DevOps process, including GitHub integration and CI/CD pipelines.
Soft Skills:
Exceptional problem-solving and analytical skills.
Strong communication and interpersonal skills to manage client relationships and team dynamics.
Ability to thrive in a consulting environment, quickly adapting to new challenges and domains.
Ability to handle ambiguity and proactively take ownership of challenges.
Demonstrated accountability, ownership, and a proactive approach to solving problems.
Why Join Us?
Be at the forefront of data innovation and lead impactful projects.
Work with a collaborative and forward-thinking team.
Opportunity to mentor and develop talent in the data engineering space.
Competitive compensation and benefits package.
A dynamic environment where your contributions directly shape the future of data-driven decision-making.
About Us
Logic Pursuits provides companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations, where fact-based decision-making is embedded into daily operations, which leads to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively. With deep, Big Four consulting experience in business transformation and efficient processes, Logic Pursuits is a game-changer in any operations strategy.
Posted 1 week ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a wide range of opportunities across industries throughout the country.
The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
- SQL
- ETL Tools
- Data Modeling
- Cloud Computing (AWS)
- Python/R Programming
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!