7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Technology @Dream11:
Technology is at the core of everything we do. Our technology team helps us deliver a mobile-first experience across platforms (Android & iOS) while managing over 700 million requests per minute (rpm) at peak, with user concurrency of over 16.5 million. We run more than 190 microservices written in Java and backed by the Vert.x framework. These serve isolated product features with discrete architectures to cater to their respective use cases. We work with terabytes of data on infrastructure built on top of Kafka, Redshift, Spark, Druid, etc., which powers a number of use cases such as machine learning and predictive analytics. Our tech stack is hosted on AWS, with distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, etc.

Your Role:
- Lead the design, development, and delivery of software systems, spanning frontend (UI/UX, client-side) and backend (APIs, microservices, infrastructure), using agile SDLC processes.
- Oversee code reviews, testing, deployment, and monitoring to ensure scalable, high-quality solutions.
- Troubleshoot issues, from frontend rendering bugs to backend outages; conduct root cause analysis and draft incident reports.
- Partner with stakeholders to prioritize roadmaps, manage backlogs, and address technical debt.
- Estimate effort, set timelines, and communicate transparently about risks and delays.
- Hire, mentor, and grow top engineering talent to elevate operational excellence.

Qualifiers:
- 7+ years of software engineering experience with expertise in either backend or frontend development
- 1+ years managing technical teams across frontend, backend, or full-stack domains
- Working knowledge of cloud platforms (e.g., AWS, GCP, Azure) and deployment practices

About Dream Sports:
Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform; FanCode, a premier sports content & commerce platform; and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 'Sportans'. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to 'Make Sports Better' for fans through the confluence of sports and technology. For more information: https://dreamsports.group/

Dream11 is the world's largest fantasy sports platform, with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.
Posted 2 weeks ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description:
- Design, implement, and maintain scalable, robust data architectures on AWS.
- Utilize AWS services such as S3, Glue, and Redshift for data storage, processing, and analytics.
- Develop and implement ETL processes using PySpark to extract, transform, and load data into AWS data storage solutions (a minimal sketch follows this listing).
- Collaborate with cross-functional teams to ensure seamless data flow across different AWS services.
- Proven experience as a Data Engineer with a focus on AWS and PySpark.
- Strong proficiency in PySpark for ETL and data processing tasks.
- Hands-on experience with AWS data services such as S3, Glue, and Redshift.
- Solid understanding of data modeling, SQL, and database design.

Skills: AWS, PySpark, Data Engineer
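A minimal PySpark sketch of the S3-based ETL pattern this role describes. This is illustrative only; the bucket names and columns are hypothetical placeholders, not taken from the posting:

from pyspark.sql import SparkSession, functions as F

# Sketch: read raw CSV from S3, deduplicate and type-cast, then write
# partitioned Parquet back to S3 (e.g., for a Glue catalog / Redshift Spectrum table).
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

(
    cleaned.write
           .mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://example-curated-bucket/orders/")
)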
Posted 2 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: Lead Analytics Engineer

We are seeking a talented, motivated, and self-driven professional to join the HH Digital, Data & Analytics (HHDDA) organization and play an active role in the Human Health transformation journey to become the premier "Data First" commercial biopharma organization. As a Lead Analytics Engineer, you will be part of the HHDDA Commercial Data Solutions team, providing technical and data expertise for the development of analytical data products that enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space, developing best-in-class data pipelines and products and working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment, and delivery.

Your specific responsibilities will include:
- Design and implementation of last-mile data products using the most up-to-date technologies and software/data/DevOps engineering practices
- Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions, utilizing datasets in an optimal way
- Develop deep domain expertise and business acumen to ensure that all specifics and pitfalls of data sources are accounted for
- Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts, and visualization developers on how to use these data models
- Develop analytical data products for reusability, governance, and compliance by design
- Align with organization strategy and implement a semantic layer for analytics data products
- Support data stewards and other engineers in maintaining data catalogs, data quality measures, and governance frameworks

Education:
B.Tech / B.S., M.Tech / M.S., or PhD in Engineering, Computer Science, Pharmaceuticals, Healthcare, Data Science, Business, or a related field

Required Experience:
- 8+ years of relevant work experience in the pharmaceutical/life sciences industry, with demonstrated hands-on experience analyzing, modeling, and extracting insights from commercial/marketing analytics datasets (specifically, real-world datasets)
- High proficiency in SQL, Python, and AWS
- Experience creating/adapting data models to meet requirements from Marketing, Data Science, and Visualization stakeholders
- Experience with feature engineering
- Experience with cloud-based (AWS/GCP/Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.)
- Experience with modern data stack tools such as Matillion, Starburst, ThoughtSpot, and low-code tools (e.g., Dataiku)
- Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders
- Experience in analytics use cases of pharmaceutical products and vaccines
- Experience in market analytics and related use cases

Preferred Experience:
- Experience in analytics use cases focused on informing marketing strategies and commercial execution of pharmaceutical products and vaccines
- Experience with Agile ways of working, leading or working as part of scrum teams
- Certifications in AWS and/or modern data technologies
- Knowledge of the commercial/marketing analytics data landscape and key data sources/vendors
- Experience building data models for data science and visualization/reporting products, in collaboration with data scientists, report developers, and business stakeholders
- Experience with data visualization technologies (e.g., Power BI)

Our Human Health Division maintains a "patient first, profits later" ideology. The organization is composed of sales, marketing, market access, digital analytics, and commercial professionals who are passionate about their role in bringing our medicines to our customers worldwide.

We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another's thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Flexible Work Arrangements: Hybrid
Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Job Posting End Date: 03/20/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R335372
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are here with a walk-in drive for cloud technology professionals. Yes, redefine your career with us! Join the walk-in drive at TCS Chennai.

Role: AWS Data Engineer
Experience: 4 to 8 years
Walk-In Drive Date: 7th June 2025
Registration Time: 09:30 AM – 12:30 PM
Venue: TCS Chennai Siruseri Office, 1/G1, SIPCOT IT Park, Navalur, Siruseri, Tamil Nadu 603103

Roles and Responsibilities:
· Good hands-on experience in Python programming and PySpark
· Data engineering experience using AWS core services (Lambda, Glue, EMR, and Redshift); a brief orchestration sketch follows this listing
· Required skill set: SQL, Airflow
· Must have: experience with Snowflake, Hive, or AWS S3
· Responsibility: accountable for building and testing complex data pipelines (batch and near real time)
· Expectation: readable documentation of all the components being developed
· Experience in writing SQLs and stored procedures
· Working experience with RDBMS (Oracle/Teradata)
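As a hedged illustration of the Lambda-plus-Glue pattern such roles involve (the job name, bucket, and argument keys below are hypothetical, not from the posting), a Lambda handler can start a Glue ETL job when new data lands in S3:

import boto3

# Sketch: trigger a Glue ETL job from a Lambda invoked by an S3 event.
glue = boto3.client("glue")

def handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # Start the (hypothetical) Glue job, passing the new object's path as an argument.
    run = glue.start_job_run(
        JobName="orders-batch-etl",
        Arguments={"--input_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": run["JobRunId"]}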
Posted 2 weeks ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
Join our dynamic Digital Marketing Data Engineering team at Fanatics, where you'll play a critical role in shaping the big data ecosystem that powers our eCommerce and digital marketing platforms. As a full-time Staff Data Engineer, you'll design, build, and optimize scalable data pipelines and architectures, ensuring seamless data flow and effective collection across cross-functional teams. You will also leverage your backend engineering skills to support API integrations and real-time data exchanges.

What We're Looking For:
- BTech/MTech/BS/MS in Computer Science or a related field, or equivalent practical experience
- 9+ years of software engineering experience, with a strong track record in building data pipelines and big data solutions
- At least 5 years of hands-on experience in data engineering roles
- Proficiency in big data technologies such as: Apache Spark, Apache Iceberg, Amazon Redshift, Athena, EMR; Apache Airflow, Apache Kafka; AWS services (S3, Lambda)
- Expertise in at least one programming language: Scala, Java, or Python
- Strong background in designing and building data models, integrating data from multiple sources, and developing robust ETL/ELT pipelines
- Expert-level SQL programming skills
- Proven data analysis and data modeling expertise, with the ability to create data-driven insights and effective visualizations
- Familiarity with data quality, lineage, and governance frameworks
- Energetic, detail-oriented, and collaborative, with a passion for delivering high-quality solutions

Bonus Points:
- Experience in the e-commerce or retail domain
- Knowledge of StarRocks or similar OLAP engines
- Experience with web services, API integrations, third-party data exchanges, and streaming platforms
- A passion for building scalable, high-quality analytics platforms and data products

About Us:
Fanatics is building a leading global digital sports platform. The company ignites the passions of global sports fans and maximizes the presence and reach for hundreds of sports partners globally by offering innovative products and services across Fanatics Commerce, Fanatics Collectibles, and Fanatics Betting & Gaming, allowing sports fans to Buy, Collect, and Bet. Through the Fanatics platform, sports fans can buy licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods; collect physical and digital trading cards, sports memorabilia, and other digital assets; and bet as the company builds its Sportsbook and iGaming platform. Fanatics has an established database of over 100 million global sports fans; a global partner network with over 900 sports properties, including major national and international professional sports leagues, teams, players associations, athletes, celebrities, colleges, and college conferences; and over 2,000 retail locations, including its Lids retail business stores. As a market leader with more than 18,000 employees, and hundreds of partners, suppliers, and vendors worldwide, we take responsibility for driving toward more ethical and sustainable practices. We are committed to building an inclusive Fanatics community, reflecting and representing society at every level of the business, including our employees, vendors, partners, and fans. Fanatics is also dedicated to making a positive impact in the communities where we all live, work, and play through strategic philanthropic initiatives.
About The Team:
Fanatics Commerce is a leading designer, manufacturer, and seller of licensed fan gear, jerseys, lifestyle and streetwear products, headwear, and hardgoods. It operates a vertically integrated platform of digital and physical capabilities for leading sports leagues, teams, colleges, and associations globally, as well as its flagship site, www.fanatics.com. Fanatics Commerce has a broad range of online, sports venue, and vertical apparel partnerships worldwide, including comprehensive partnerships with leading leagues, teams, colleges, and sports organizations across the world, including the NFL, NBA, MLB, NHL, MLS, Formula 1, and Australian Football League (AFL); the Dallas Cowboys, Golden State Warriors, Paris Saint-Germain, Manchester United, Chelsea FC, and Tokyo Giants; the University of Notre Dame, University of Alabama, and University of Texas; the International Olympic Committee (IOC), England Rugby, and the Union of European Football Associations (UEFA).
Posted 2 weeks ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description
About Sopra Steria: Sopra Steria, a major tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description
The world is how we shape it.
- Excellent communication skills (verbal and written) at all levels
- Scrum Master advanced certifications (PSM, SASM, SSM, etc.)
- Working experience with Agile project management tools (Jira, VersionOne, Rally, etc.)

Good to have skills:
○ Knowledge and experience working with the SAFe & Agile frameworks
○ Experience with continuous delivery, DevOps, and release management
○ Ability to communicate concisely and accurately to the team and to management
○ Knowledge of all or several of the following: software development (Python, JavaScript, ASP, C#, HTML5...); data storage technologies (SQL, .NET, NoSQL (Neo4j, Neptune), S3, AWS (Redshift)); web development technologies and frameworks (e.g., Angular, AngularJS, ReactJS...); DevOps methodologies and practices

Total Experience Expected: 08-10 years

Qualifications:
Engineering graduate with a minimum of 6 to 8 years of total experience, of which a minimum of 3+ years as a Scrum Master

Additional Information:
Comfortable working from the client location and in the EU timezone. At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
Posted 2 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
We are looking for a Senior Data Engineer with deep expertise in Python, Apache Spark, and Apache Airflow to design, build, and optimize scalable data pipelines and processing frameworks. You will play a key role in managing large-scale data workflows, ensuring data quality, performance, and timely delivery for analytics and machine learning platforms.

Key Responsibilities:
- Design, develop, and maintain data pipelines using Apache Spark (PySpark) and Airflow for batch and near real-time processing.
- Write efficient, modular, and reusable Python code for ETL jobs, data validation, and transformation tasks.
- Implement robust data orchestration workflows using Apache Airflow (DAGs, sensors, hooks, etc.); a minimal DAG sketch follows this listing.
- Work with big data technologies on distributed platforms (e.g., Hadoop, AWS EMR, Databricks).
- Ensure data integrity, security, and governance across various stages of the pipeline.
- Monitor and optimize pipeline performance; resolve bottlenecks and failures proactively.
- Collaborate with data scientists, analysts, and other engineers to support data needs.
- Document architecture, processes, and code to support maintainability and scalability.
- Participate in code reviews, architecture discussions, and production deployments.
- Mentor junior engineers and provide guidance on best practices.

Required Skills:
- 8+ years of experience in data engineering or backend development roles.
- Strong proficiency in Python, including data manipulation (Pandas, NumPy) and writing scalable code.
- Hands-on experience with Apache Spark (preferably PySpark) for large-scale data processing.
- Extensive experience with Apache Airflow for workflow orchestration and scheduling.
- Deep understanding of ETL/ELT patterns, data quality, lineage, and data modeling.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and related services (S3, BigQuery, Redshift, etc.).
- Solid experience with SQL, NoSQL, and file formats like Parquet, ORC, and Avro.
- Proficiency with CI/CD pipelines, Git, Docker, and Linux-based development environments.

Preferred Qualifications:
- Experience with data lakehouse architectures (e.g., Delta Lake, Iceberg).
- Exposure to real-time streaming technologies (e.g., Kafka, Flink, Spark Streaming).
- Background in machine learning pipelines and MLOps tools (optional).
- Knowledge of data governance frameworks and compliance standards.

Soft Skills:
- Strong problem-solving and communication skills.
- Ability to work independently and lead complex projects.
- Experience working in agile and cross-functional teams.
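A minimal sketch of the Airflow DAG-plus-sensor pattern referenced above. Task names, bucket, and key are hypothetical, and parameter names vary slightly across Airflow 2.x versions:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

def transform(**context):
    # Placeholder transform step; in practice this might submit a PySpark job.
    print("transforming partition", context["ds"])

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    # Wait for the day's raw drop to land in S3 before processing.
    wait_for_raw = S3KeySensor(
        task_id="wait_for_raw",
        bucket_name="example-raw-bucket",
        bucket_key="orders/{{ ds }}/_SUCCESS",
        poke_interval=300,
        timeout=60 * 60,
    )
    run_transform = PythonOperator(task_id="run_transform", python_callable=transform)

    wait_for_raw >> run_transform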
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from Infosys BPM Ltd,

We are hiring for UX with JavaScript, ETL Testing + Python Programming, Automation Testing with Java/Selenium/BDD/Cucumber, ETL DB Testing, and ETL Testing Automation skills. Please walk in for an interview on 10th and 11th June 2025 at our Pune location.

Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please use the link below to apply and register your application. Please mention your Candidate ID at the top of your resume.

https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-215162

Interview details:
Interview Date: 10th and 11th June 2025
Interview Time: 10 AM till 1 PM
Interview Venue: Pune: Hinjewadi Phase 1, Infosys BPM Limited, Plot No. 1, Building B1, Ground floor, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi Phase 1, Pune, Maharashtra 411057

Please find the job descriptions below for your reference. Work from Office; a minimum of 2 years of project experience is mandatory.

Job Description: UX with JavaScript
- Technical design tools (e.g., Photoshop, XD, Figma); strong knowledge of HTML, CSS, and JavaScript; experience with SharePoint customization
- Experience with wireframes, prototypes, intuitive and responsive design, and documentation; able to lead a team
- Nice to have: understanding of SharePoint Framework (SPFx) and modern SharePoint development

Job Description: ETL Testing + Python Programming
- Experience in data migration testing (ETL testing), manual and automation, with Python programming
- Strong at writing complex SQLs for data migration validations
- Work experience with Agile Scrum methodology
- Functional testing; UI test automation using Selenium and Java
- Financial domain experience
- Good to have: AWS knowledge

Job Description: Automation Testing with Java, Selenium, BDD, Cucumber
- Hands-on experience in automation; Java, Selenium, BDD, and Cucumber expertise is mandatory
- Banking domain experience is a plus; financial domain experience
- Automation talent with TOSCA skills and payment domain skills is preferable

Job Description: ETL DB Testing
- Strong experience in ETL testing, data warehousing, and business intelligence
- Strong proficiency in SQL
- Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory)
- Solid understanding of data warehousing concepts, database systems, and quality assurance
- Experience with test planning, test case development, and test execution
- Experience writing complex SQL queries and using SQL tools is a must; exposure to various data analytical functions
- Familiarity with defect tracking tools (e.g., Jira)
- Experience with cloud platforms like AWS, Azure, or GCP is a plus
- Experience with Python or other scripting languages for test automation is a plus
- Experience with data quality tools is a plus
- Experience in testing large datasets
- Experience in agile development is a must
- Understanding of Oracle Database and UNIX/VMC systems is a must

Job Description: ETL Testing Automation
- Strong experience in ETL testing and automation
- Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server)
- Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark)
- Hands-on experience in developing and maintaining test automation frameworks
- Proficiency in at least one programming language (e.g., Python, Java)
- Experience with test automation tools (e.g., Selenium, PyTest, JUnit)
- Strong understanding of data warehousing concepts and methodologies
- Experience with CI/CD pipelines and version control systems (e.g., Git)
- Experience with cloud-based data warehouses like Snowflake, Redshift, or BigQuery is a plus
- Experience with data quality tools is a plus

REGISTRATION PROCESS:
The Candidate ID and SHL Test (AMCAT ID) are mandatory to attend the interview. Please follow the instructions below to successfully complete the registration. (Candidates without registration and assessment will not be allowed into the interview.)

Candidate ID registration process:
STEP 1: Visit https://career.infosys.com/joblist
STEP 2: Click on "Register", provide the required details, and submit.
STEP 3: Once submitted, your Candidate ID (100XXXXXXXX) will be generated.
STEP 4: The Candidate ID will be shared to the registered email ID.

SHL Test (AMCAT ID) registration process:
This assessment is proctored, and talent is evaluated on basic analytics, English comprehension, and writex (email writing).
STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0
STEP 2: Click on "Start new test" and follow the instructions to complete the assessment.
STEP 3: Once completed, please make a note of the AMCAT ID (access your AMCAT ID by clicking the 3 dots in the top right corner of the screen).

NOTE: During registration, you'll be asked to provide the following information:
- Personal details: name, email address, mobile number, PAN number
- Availability: acknowledgement of work schedule preferences (shifts, work from office, rotational weekends, 24/7 availability, transport boundary) and reason for career change
- Employment details: current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example)
- Candidate information: 10-digit Candidate ID starting with 100XXXXXXX, gender, source (e.g., vendor name, Naukri/LinkedIn/Foundit, or direct), and location
- Interview mode: walk-in

Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. Five or more tab toggles, multiple faces detected, face not detected, or any malpractice will result in rejection. Once you've finished, submit the assessment and make a note of the AMCAT ID (15 digits) used for the assessment.

Documents to carry:
- Please have a note of your Candidate ID and AMCAT ID along with your registered email ID.
- Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions.
- Please carry 2 sets of your updated resume/CV (hard copy).
- Please carry original ID proof for security clearance.
- Please carry individual headphones/Bluetooth for the interview.

Pointers to note:
- An original government ID card is a must for security clearance.

Regards,
Infosys BPM Recruitment team.
Posted 2 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
As the hands-on Staff Database Engineer for Clearwater Analytics, you will play a crucial role in designing, developing, and maintaining data systems and architectures to collect, store, process, and analyze large volumes of data. You will build data pipelines, optimize data models, and ensure data quality and security. You will collaborate with cross-functional teams to meet business objectives and stay updated with emerging technologies and industry best practices.

Responsibilities and Duties:
- Extensive experience with Snowflake, including proficiency in SnowSQL CLI, Snowpipe, creating custom functions, developing Snowflake stored procedures, schema modeling, and performance tuning.
- In-depth expertise in Snowflake data modeling and ELT processes using Snowflake SQL, as well as implementing complex stored procedures and leveraging Snowflake task orchestration for advanced data workflows (a brief task sketch follows this listing).
- Strong background in dbt CLI, dbt Cloud, and GitHub version control, with the ability to design and develop complex SQL processes and ELT pipelines.
- Take a hands-on approach to designing, developing, and supporting low-latency data pipelines, prioritizing data quality, accuracy, reliability, and efficiency.
- Advanced SQL knowledge and hands-on experience in complex query writing using analytical functions; troubleshooting, problem-solving, and performance tuning of SQL queries accessing the data warehouse; strong knowledge of stored procedures.
- Collaborate closely with cross-functional teams, including enterprise architects, business analysts, product owners, and solution architects, actively engaging in gathering comprehensive business requirements and translating them into scalable data cloud and Enterprise Data Warehouse (EDW) solutions that precisely align with organizational needs.
- Play a hands-on role in conducting data modeling, ETL (Extract, Transform, Load) development, and data integration processes across all Snowflake environments.
- Develop and implement comprehensive data governance policies and procedures to fortify the accuracy, security, and compliance of Snowflake data assets across all environments.
- Independently conceptualize and develop innovative ETL and reporting solutions, driving them through to successful completion.
- Create comprehensive documentation for database objects and structures to ensure clarity and consistency.
- Troubleshoot and resolve production support issues post-deployment, providing effective solutions as needed.
- Devise and sustain comprehensive data dictionaries, metadata repositories, and documentation to bolster governance and facilitate usage across all Snowflake environments.
- Remain abreast of the latest industry trends and best practices, actively sharing knowledge and encouraging the team to continuously enhance their skills.
- Continuously monitor the performance and usage metrics of the Snowflake database and Enterprise Data Warehouse (EDW), conducting frequent performance reviews and implementing targeted optimization efforts.

Skills Required:
- Familiarity with big data and data warehouse architecture and design principles
- Strong understanding of database management systems, data modeling techniques, data profiling, and data cleansing techniques
- Expertise in Snowflake architecture, administration, and performance tuning
- Experience with Snowflake security configurations and access controls
- Knowledge of Snowflake's data sharing and replication features
- Proficiency in SQL for data querying, manipulation, and analysis
- Experience with ETL (Extract, Transform, Load) tools and processes
- Ability to translate business requirements into scalable EDW solutions
- Streaming technologies like AWS Kinesis

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field
- 10 years of hands-on experience in data warehousing, ETL development, and data modeling, with a strong track record of designing and implementing scalable Enterprise Data Warehouse (EDW) solutions
- 3+ years of extensive hands-on experience with Snowflake, demonstrating expertise in leveraging its capabilities
- Proficiency in SQL and deep knowledge of various database management systems (e.g., Snowflake, Azure, Redshift, Teradata, Oracle, SQL Server)
- Experience utilizing ETL tools and technologies such as dbt, Informatica, SSIS, Talend
- Expertise in data modeling techniques, with a focus on dimensional modeling and star schema design
- Familiarity with data governance principles and adeptness in implementing security best practices
- Excellent problem-solving and troubleshooting abilities, coupled with a proven track record of diagnosing and resolving complex database issues
- Demonstrated leadership and team management skills, with the ability to lead by example and inspire others to strive for excellence
- Experience in the finance industry will be a significant advantage
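The stream-plus-task orchestration named above boils down to Snowflake SQL DDL. A minimal, hedged sketch using the Snowflake Python connector; the account, warehouse, and object names are placeholders:

import snowflake.connector

# Sketch: create a stream on a raw table and a scheduled task that loads new
# rows into a curated table whenever the stream has data.
conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# Change-tracking stream over the raw table.
cur.execute("CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders")

# Task that runs every 15 minutes, but only when the stream has new rows.
cur.execute("""
    CREATE OR REPLACE TASK load_curated_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO curated.orders
      SELECT order_id, amount, order_ts FROM raw_orders_stream
""")

cur.execute("ALTER TASK load_curated_orders RESUME")  # tasks start suspended
conn.close()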
Posted 2 weeks ago
20.0 years
0 Lacs
India
Remote
Company: AA GLOBUSDIGITAL INDIA PRIVATE LIMITED

AA GLOBUSDIGITAL INDIA PRIVATE LIMITED is a wholly owned subsidiary of Globus Systems Inc, US. Globus Systems was founded by industry executives who have been part of the IT services industry for the past 20 years and have seen it evolve and mature. We understand the challenges faced by organizations as they prepare for the future. As a technology delivery company, we are focused on helping organizations lay a foundation for their "Tomorrow-Roadmap".

At the heart of any business is the data that drives decisions. Data integrity and security are key drivers for growth. Smart and timely use of technology can help build, streamline, and enable data-driven decisions that become the backbone of an organization. Business leaders are constantly searching for new solutions, services, and partners they can trust with enabling these drivers.

Location: PAN India
NP: Immediate joiners required
Experience: 6 to 9 years
Work Mode: WFH
CTC: Market standard

Required skills:
- Expert SQL (6+ years), including stored procedures
- Expert git (6+ years)
- Expert Python (5+ years)
- dbt, including custom packages, macros, and other core functionality
- Apache Airflow/Dagster
- Postgres/MySQL experience required
- Snowflake/Redshift/BigQuery experience required
- Knowledge of character set differences and character set conversion techniques (a small illustration follows this listing)

Additional preferred experience:
- MS SQL Server and SSIS experience
- Knowledgeable about applying AI technology to data engineering

Responsibilities:
- Analyze raw data containing hundreds of millions of records to answer questions for business users
- Prepare data sets for various production use cases, including production of direct mail, custom reports, and data science purposes
- Create testable, high-performance ETL and ELT pipelines using modern technologies
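Since the listing calls out character set conversion explicitly, here is a small, hedged Python illustration of the underlying issue; the sample string and encodings are chosen purely for demonstration:

# The same bytes decode differently depending on the assumed source encoding,
# so pipelines must track encodings explicitly rather than guessing.
raw = "café".encode("latin-1")                  # b'caf\xe9'

print(raw.decode("latin-1"))                    # 'café' - correct source encoding
print(raw.decode("utf-8", errors="replace"))    # 'caf\ufffd' - wrong assumption

# Normalizing to UTF-8 before loading into Postgres/Snowflake:
normalized = raw.decode("latin-1").encode("utf-8")
print(normalized)                               # b'caf\xc3\xa9'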
Posted 2 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Architect
Location: Noida, India

Data Architecture Design:
- Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
- Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems.
- Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management:
- Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
- Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
- Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders:
- Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
- Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership:
- Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
- Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security:
- Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems.
- Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership:
- Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
- Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Required Skills & Experience:

Extensive Data Architecture Expertise:
- Over 7 years of experience in data architecture, data modeling, and database management.
- Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions.
- Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.

Advanced Knowledge of Data Platforms:
- Expertise in the Azure cloud data platform is a must. Experience with other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) is a bonus.
- Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing.
- Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).

Data Governance & Compliance:
- Strong understanding of data governance principles, data lineage, and data stewardship.
- Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.

Technical Leadership:
- Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise.
- Strong programming skills in languages such as Python, SQL, R, or Scala.
Certification: Azure Certified Solution Architect, Data Engineer, and Data Scientist certifications are mandatory.

Pre-Sales Responsibilities:
- Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
- Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
- Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
- Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
- Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

Additional Responsibilities:
- Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
- Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
- Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
- Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
- Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
- Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
- Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.

Qualifications:
- Education: Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
- Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
- Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
- Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
- Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra
On-site
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing and building ETL pipelines

Alexa+ is our next-generation assistant powered by generative AI. Alexa+ is more conversational, smarter, personalized, and gets things done. Our goal is to make Alexa+ an instantly familiar personal assistant that is always ready to help or entertain on any device. At the core of this vision is Alexa AI Developer Tech, a close-knit team that's dedicated to providing software developers with the tools, primitives, and services they need to easily create engaging customer experiences that expand the wealth of information, products, and services available on Alexa+. You will join a growing organization working on top technology using generative AI and have an enormous opportunity to make an impact on the design, architecture, and implementation of products used every day, by people you know. We're working hard, having fun, and making history; come join us!

Key job responsibilities:
* Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business
* Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms
* Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
* Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture
* Design, build, and own all the components of a high-volume data warehouse end to end
* Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment)
* Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
* Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
* Own the functional and nonfunctional scaling of software systems in your ownership area
* Implement big data solutions for distributed computing

About the team:
Alexa AI Developer Tech is an organization within Alexa on a mission to empower developers to create delightful and engaging experiences by making Alexa more natural, accurate, conversational, and personalized.

- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description

About Amazon.com: Amazon.com strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from website to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.

About the team: The RBS team is an integral part of Amazon's online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection, and good product information. The team's primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and the online user experience.

Overview of the role: A candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and capable of dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimization and systems thinking and will be required to engage directly with multiple internal teams to drive business projects/automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven environment.

Responsibilities include:
- Work across teams and the Ops organization at country, regional, and/or cross-regional level to drive improvements and enable solutions for customers and cost savings in process workflow, systems configuration, and performance metrics.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proficiency in automation using Python
- Excellent oral and written communication skills
- Experience with SQL, ETL processes, or data transformation

Preferred Qualifications:
- Experience with scripting and automation tools
- Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
- Knowledge of AWS services such as SQS, SNS, CloudWatch, and DynamoDB
- Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
- Understanding of cloud services, serverless architecture, and systems integration

Key job responsibilities: As a Business Intelligence Engineer in the team, you will collaborate closely with business partners to architect, design, and implement BI projects and automations.

Responsibilities:
- Design, development, and ongoing operations of scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards.
- Development of moderately to highly complex data processing jobs using appropriate technologies (e.g., SQL, Python, Spark, AWS Lambda, etc.); a brief Lambda-to-Redshift sketch follows this listing.
- Development of dashboards and reports.
- Collaborating with stakeholders to understand business domains, requirements, and expectations; additionally, working with owners of data source systems to understand capabilities and limitations.
- Deliver minimally to moderately complex data analysis, collaborating as needed with Data Science as complexity increases.
- Actively manage the timeline and deliverables of projects; anticipate risks and resolve issues.
- Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.

Internal job description: Retail Business Service ARTS is a growing team that supports the Retail Efficiency and Paid Services business and tech teams. There is ample growth opportunity in this role for someone who exhibits Ownership and Insists on the Highest Standards, and who has strong engineering and operational best practices experience.

Basic Qualifications:
- 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science, or a related field
- Experience with data modeling, SQL, ETL, data warehousing, and data lakes
- Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.)
- Expert-level SQL
- Proficiency with one or more general-purpose programming languages (e.g., Python, Java, Scala, etc.)
- Knowledge of AWS products such as Redshift, QuickSight, and Lambda
- Excellent verbal/written communication and data presentation skills, including the ability to succinctly summarize key findings and effectively communicate with both business and technical teams

Preferred Qualifications:
- Experience with data-specific programming languages/packages such as R or Python Pandas
- Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR
- Knowledge of machine learning techniques and concepts

Basic Qualifications:
- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

Preferred Qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ
Job ID: A2997942
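A hedged sketch of the SQL-plus-Lambda job pattern mentioned above, using the Redshift Data API (one of several possible approaches); the cluster, database, user, and table names are hypothetical placeholders:

import boto3

# Sketch: a Lambda that refreshes a Redshift reporting table via the Redshift Data API.
client = boto3.client("redshift-data")

REFRESH_SQL = """
    BEGIN;
    TRUNCATE reporting.daily_sales;
    INSERT INTO reporting.daily_sales
    SELECT order_date, SUM(amount) AS total_amount
    FROM curated.orders
    GROUP BY order_date;
    COMMIT;
"""

def handler(event, context):
    # Submit the SQL asynchronously; the Data API returns a statement ID
    # that can be polled with describe_statement if needed.
    resp = client.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=REFRESH_SQL,
    )
    return {"StatementId": resp["Id"]}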
Posted 2 weeks ago
1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description

This role is for a 1-year term at Amazon.

Job Description: Are you interested in applying your strong quantitative analysis and big data skills to world-changing problems? Are you interested in driving the development of methods, models, and systems for strategy planning, transportation, and the fulfillment network? If so, then this is the job for you.

Our team is responsible for creating core analytics tech capabilities, platform development, and data engineering. We develop scalable analytics applications across APAC, MENA, and LATAM. We standardize and optimize data sources and visualization efforts across geographies, and we build up and maintain the online BI services and data mart. You will work with professional software development managers, data engineers, business intelligence engineers, and product managers, using rigorous quantitative approaches to ensure high-quality data tech products for our customers around the world, including India, Australia, Brazil, Mexico, Singapore, and the Middle East.

Amazon is growing rapidly, and because we are driven by faster delivery to customers, a more efficient supply chain network, and lower cost of operations, our main focus is the development of strategic models and automation tools fed by our massive amounts of available data. You will be responsible for building these models/tools that improve the economics of Amazon's worldwide fulfillment networks in emerging countries as Amazon increases the speed and decreases the cost of delivering products to customers. You will identify and evaluate opportunities to reduce variable costs by improving fulfillment center processes, transportation operations and scheduling, and the execution of operational plans.

Major responsibilities include:
- Translating business questions and concerns into specific analytical questions that can be answered with available data using BI tools; producing the required data when it is not available
- Writing SQL queries and automation scripts
- Ensuring data quality throughout all stages of acquisition and processing, including such areas as data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc.
- Communicating proposals and results in a clear manner, backed by data and coupled with actionable conclusions to drive business decisions
- Collaborating with colleagues from multidisciplinary science, engineering, and business backgrounds
- Developing efficient data querying and modeling infrastructure
- Managing your own process: prioritizing and executing high-impact projects, triaging external requests, and ensuring projects are delivered on time
- Utilizing code (SQL, Python, R, Scala, etc.) for analyzing data and building data marts

Basic Qualifications:
- 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

Preferred Qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ASSPL - Karnataka
Job ID: A2997231
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities:
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience:
- First Class Degree in Engineering/Technology (4-year graduate course)
- 2-4 years' experience implementing data-intensive solutions using agile methodologies
- Experience with relational databases and using SQL for data querying, transformation, and manipulation
- Experience modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have):
- ETL: Hands-on experience building data pipelines; proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend, or Informatica
- Big Data: Exposure to 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts; relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable):
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of the underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
The Opportunity
The Customer Insights Analytics and Data Growth & Platform teams are responsible for building analytics and data solutions to accelerate growth for Adobe's ~$18B annual recurring revenue Digital Media business. We are looking for a Senior Business Intelligence Analyst with a strong track record of taking end-to-end ownership and delivering scalable solutions to enable our Mobile Analytics team and partners in a fast-paced and multifaceted business environment! This person will be a creative problem solver who is passionate about building innovative solutions: resourceful, collaborative, and insightful, with the ability to work independently under time constraints. We need an exceptional BI analyst to uncover deep insights, enabling business users to make faster, data-driven decisions. In this role, you will demonstrate your critical thinking and technical expertise across multiple domains, driving the growth of our mobile user base. The position entails delivering vital insights to support GTM teams in decision-making and tracking business performance using a variety of key metrics.
What you'll Do
- Analyze and interpret complex mobile data to deliver actionable insights through weekly/biweekly RTB (Run the Business) operations and deep-dive analyses.
- Develop and maintain BI dashboards and reports to track critical metrics for mobile analytics.
- Collaborate with multi-functional teams to understand business requirements and translate them into analytical solutions.
- Conduct deep-dive analyses to identify trends, patterns, and opportunities for optimization.
- Design and implement data models and algorithms to improve mobile user experience and engagement.
- Present findings and recommendations to partners in a clear and concise manner.
- Stay informed about industry trends and standard methodologies in mobile analytics and business intelligence.
What you need to succeed
- This position requires a Bachelor's Degree in Computer Science or a related technical field.
- 5+ years of experience in Business Operations, Strategy, or Analytics at a top management consulting firm, SaaS organization or financial services institution.
- Proficiency in data extraction, transformation and analysis tools such as SQL, Python/PySpark and R, as well as data visualization tools like Tableau or Power BI.
- Strong hands-on working experience with more than one database platform (e.g. Databricks, MySQL, Snowflake, PostgreSQL, Redshift, Azure SQL Warehouse) on leading commercial cloud platforms like Azure, AWS and/or GCP.
- Experience in the mobile analytics domain with a clear understanding of mobile app metrics and user behavior analysis.
- Strong written and verbal communication, interpersonal and presentation skills, with the ability to engage and effectively present complex data insights to technical and non-technical audiences.
- Outstanding problem-solving and analytical skills, including a talent for conducting research, analyzing data, developing hypotheses, and synthesizing recommendations.
- Partner with multi-functional teams across various regions, including partner teams, to identify their analytical needs and deliver actionable insights.
- Experience with A/B testing and experimentation methodologies.
- Familiarity with SaaS business models and mobile app ecosystems is a plus.
- High degree of intellectual curiosity and ability to absorb new concepts quickly.
- Self-starter with an ability to work through ambiguity and prioritize multiple projects and partners.
- Experience working with a diverse distributed team across a complex, matrixed environment.
- Strong alignment with Adobe's Core Values: Raise the bar, Create the future, Own the outcome, Be genuine.
Adobe is proud to be an Equal Employment Opportunity and affirmative action employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015. Adobe values a free and open marketplace for all employees and has policies in place to ensure that we do not enter into illegal agreements with other companies to not recruit or hire each other’s employees.
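To make the analysis side of this role concrete, here is a small sketch computing daily active users (a staple mobile metric) from a hypothetical events table, using SQL issued from Python with pandas and SQLAlchemy. The connection string, table, and columns are all assumptions:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection (Redshift speaks the PostgreSQL wire protocol)
engine = create_engine("postgresql+psycopg2://analyst:secret@warehouse.example.com:5439/analytics")

dau_sql = """
    SELECT event_date,
           COUNT(DISTINCT user_id) AS dau
    FROM mobile_events            -- hypothetical events table
    WHERE event_date >= CURRENT_DATE - 28
    GROUP BY event_date
    ORDER BY event_date
"""

dau = pd.read_sql(dau_sql, engine)
print(dau.tail())  # this frame could feed a Tableau/Power BI extract or further analysis
```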
Posted 2 weeks ago
8.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
Remote
Position: Senior Database Administrator
Position Overview
As a Senior Database Administrator (DBA) at Intelex Technologies, you will play a critical role in managing and optimizing our MS SQL Server, Oracle, and PostgreSQL database environments. You will be responsible for the design, implementation, performance tuning, high availability, and security of our database infrastructure across cloud and on-premises deployments. Working within the DevOps & DataOps team, you will collaborate with developers, cloud engineers, and SREs to ensure seamless database operations supporting our mission-critical applications.
Responsibilities and Deliverables
Database Administration & Architecture
- Design, implement, and optimize databases across MS SQL Server, Oracle, and PostgreSQL environments.
- Participate in architecture/design reviews, ensuring database structures align with application needs and performance goals.
- Define and maintain best practices for schema design, indexing strategies, and query optimization.
Performance Tuning & Scalability
- Conduct proactive query tuning, execution plan analysis, and indexing strategies to optimize database performance.
- Monitor, troubleshoot, and resolve performance bottlenecks across MS SQL Server, Oracle, and PostgreSQL.
- Implement partitioning, replication, and caching to improve data access and efficiency.
High Availability, Replication & Disaster Recovery
- Design and implement HA/DR solutions for all supported databases, including MS Clustering, Oracle Data Guard, PostgreSQL Streaming Replication, and Always On Availability Groups.
- Perform capacity planning and ensure proper backup and recovery strategies are in place.
- Automate and test failover and recovery processes to minimize downtime.
Security & Compliance
- Implement role-based access control (RBAC), encryption, auditing, and compliance policies across all database environments.
- Ensure adherence to SOC 2, ISO 27001, GDPR, and HIPAA security standards.
- Collaborate with security teams to identify and mitigate vulnerabilities.
DevOps, CI/CD & Automation
- Integrate database changes into CI/CD pipelines, ensuring automated schema migrations and rollbacks.
- Use Terraform or other IaC tools for database provisioning and configuration management.
- Automate routine maintenance tasks, monitoring, and alerting using tools such as New Relic and PagerDuty.
Cloud & Data Technologies
- Manage cloud-based database solutions such as Azure SQL, Amazon RDS, Aurora, Oracle Cloud, and PostgreSQL on AWS/Azure.
- Work with NoSQL solutions like MongoDB when needed.
- Support data warehousing and analytics solutions (e.g. Snowflake, Redshift, SSAS).
Incident Response & On-Call Support
- Provide on-call support for database-related production incidents on a rotational basis.
- Conduct root cause analysis and implement long-term fixes for database-related issues.
Organizational Alignment
This is a highly collaborative role requiring close interactions with:
- DevOps & SRE teams to improve database scalability and monitoring.
- Developers to ensure efficient database designs and optimize queries.
- Cloud & Security teams to maintain compliance and security best practices.
Qualifications & Skills
Required
- 8+ years of experience managing MS SQL Server, Oracle, and PostgreSQL in enterprise environments.
- Expertise in database performance tuning, query optimization, and execution plan analysis.
- Strong experience with replication, clustering, and high-availability configurations.
- Hands-on experience with cloud databases in AWS or Azure (RDS, Azure SQL, Oracle Cloud, etc.).
- Solid experience with backup strategies, disaster recovery planning, and failover testing.
- Proficiency in T-SQL, PL/SQL, and PostgreSQL SQL scripting.
- Experience automating database tasks using PowerShell, Python, or Bash.
Preferred
- Experience with containerized database deployments such as Docker or Kubernetes.
- Knowledge of Kafka, AMQP, or event-driven architectures for handling high-volume transactions.
- Familiarity with Oracle Data Guard, GoldenGate, PostgreSQL Logical Replication, and Always On Availability Groups.
- Experience working in DevOps/SRE environments with CI/CD for database deployments.
- Exposure to big data technologies and analytical platforms.
- Certifications such as Oracle DBA Certified Professional, Microsoft Certified: Azure Database Administrator Associate, or AWS Certified Database – Specialty.
Education & Other Requirements
- Bachelor's or Master's degree in Computer Science, Data Engineering, or equivalent experience.
- This role requires a satisfactory Criminal Background Check and Public Safety Verification.
Why Join Intelex Technologies?
- Work with cutting-edge database technologies in a fast-paced, DevOps-driven environment.
- Make an impact by supporting critical EHS applications that improve workplace safety.
- Flexible remote work options and opportunities for professional growth.
- Collaborate with top-tier cloud, DevOps, and security experts to drive innovation.
Fortive Corporation Overview
Fortive’s essential technology makes the world stronger, safer, and smarter. We accelerate transformation across a broad range of applications including environmental, health and safety compliance, industrial condition monitoring, next-generation product design, and healthcare safety solutions. We are a global industrial technology innovator with a startup spirit. Our forward-looking companies lead the way in software-powered workflow solutions, data-driven intelligence, AI-powered automation, and other disruptive technologies. We’re a force for progress, working alongside our customers and partners to solve challenges on a global scale, from workplace safety in the most demanding conditions to groundbreaking sustainability solutions. We are a diverse team 17,000 strong, united by a dynamic, inclusive culture and energized by limitless learning and growth. We use the proven Fortive Business System (FBS) to accelerate our positive impact. At Fortive, we believe in you. We believe in your potential—your ability to learn, grow, and make a difference. At Fortive, we believe in us. We believe in the power of people working together to solve problems no one could solve alone. At Fortive, we believe in growth. We’re honest about what’s working and what isn’t, and we never stop improving and innovating. Fortive: For you, for us, for growth.
About Intelex
Since 1992, Intelex Technologies, ULC has been a global leader in the development and support of software solutions for Environment, Health, Safety and Quality (EHSQ) programs. Our scalable, web-based software provides clients with unprecedented flexibility in managing, tracking and reporting on essential corporate information. Intelex software easily integrates with common ERP systems like SAP and PeopleSoft, creating a seamless solution for enterprise-wide information management. Intelex’s friendly, knowledgeable staff ensures our almost 1400 clients and over 3.5 million users from companies across the globe get the most out of our groundbreaking, user-friendly software solutions. Visit www.intelex.com to learn more.
We Are an Equal Opportunity Employer. Fortive Corporation and all Fortive Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Fortive and all Fortive Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process, please contact us at applyassistance@fortive.com.
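To illustrate the execution-plan analysis this role emphasizes, here is a small sketch that captures a PostgreSQL query plan from Python with psycopg2; the connection details and query are hypothetical stand-ins:

```python
import psycopg2

# Hypothetical connection details
conn = psycopg2.connect(host="db.example.com", dbname="orders", user="dba", password="secret")

query = "SELECT * FROM orders WHERE customer_id = 4211 AND status = 'OPEN'"

with conn.cursor() as cur:
    # EXPLAIN ANALYZE executes the query and reports actual timings per plan node,
    # which is how you spot sequential scans that an index would eliminate
    cur.execute("EXPLAIN ANALYZE " + query)
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```

A plan showing a sequential scan over a large row count would typically prompt a composite index such as `CREATE INDEX ON orders (customer_id, status)`, followed by a re-check of the plan.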
Posted 2 weeks ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Senior Data Engineer
Job Summary
We are looking for an experienced Senior Data Engineer with 5+ years of hands-on experience in cloud data engineering platforms, specifically AWS, Databricks, and Azure. The ideal candidate will play a critical role in designing, building, and maintaining scalable data pipelines and infrastructure to support our analytics and business intelligence initiatives.
Key Responsibilities
- Design, develop, and optimize scalable data pipelines using AWS services (e.g. S3, Glue, Redshift, Lambda).
- Build and maintain ETL/ELT workflows leveraging Databricks and Apache Spark for processing large datasets.
- Work extensively with Azure data services such as Azure Data Lake, Azure Synapse, Azure Data Factory, and Azure Databricks.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver high-quality data solutions.
- Ensure data quality, reliability, and security across multiple cloud platforms.
- Monitor and troubleshoot data pipelines, implement performance tuning, and optimize resource usage.
- Implement best practices for data governance, metadata management, and documentation.
- Stay current with emerging cloud data technologies and industry trends to recommend improvements.
Required Qualifications
- 5+ years of experience in data engineering with strong expertise in AWS, Databricks, and Azure cloud platforms.
- Hands-on experience with big data processing frameworks, particularly Apache Spark.
- Proficiency in building complex ETL/ELT pipelines and managing data workflows.
- Strong programming skills in Python, Scala, or Java.
- Experience working with structured and unstructured data in cloud storage solutions.
- Knowledge of SQL and experience with relational and NoSQL databases.
- Familiarity with CI/CD pipelines and DevOps practices in cloud environments.
- Strong analytical and problem-solving skills with an ability to work independently and in teams.
Preferred Skills
- Experience with containerization and orchestration tools (Docker, Kubernetes).
- Familiarity with machine learning pipelines and tools.
- Knowledge of data modeling, data warehousing, and analytics architecture.
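One common pattern behind the "build and maintain ETL/ELT workflows" responsibility is an incremental load driven by a watermark column. A minimal PySpark sketch, with all paths and column names assumed:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental_load").getOrCreate()

target_path = "s3://lake-curated/customers/"   # hypothetical curated zone
source_path = "s3://lake-raw/customers/"       # hypothetical raw zone

# Read the current high-water mark from the target (None on the first run)
try:
    watermark = spark.read.parquet(target_path).agg(F.max("updated_at")).collect()[0][0]
except Exception:  # e.g. AnalysisException when the target does not exist yet
    watermark = None

source = spark.read.parquet(source_path)
if watermark is not None:
    # Only pick up rows changed since the last successful load
    source = source.filter(F.col("updated_at") > F.lit(watermark))

# Append only the new and changed rows
source.write.mode("append").parquet(target_path)
```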
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
- Support the full data engineering lifecycle including research, proof of concepts, design, development, testing, deployment, and maintenance of data management solutions
- Utilize knowledge of various data management technologies to drive data engineering projects
- Work with Operations and Product Development staff to support applications/processes to facilitate the effective and efficient implementation/migration of new clients' healthcare data through the Optum Impact Product Suite
- Lead data acquisition efforts to gather data from various structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains
- Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading data marts and reporting aggregates
- Eliminate unwarranted complexity and unneeded interdependencies
- Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges
- Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value
- Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges
- Effectively create data transformations that address business requirements and other constraints
- Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms
- Prepare high-level design documents and detailed technical design documents with best practices to enable efficient data ingestion, transformation and data movement
- Leverage DevOps tools to enable code versioning and code deployment
- Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues
- Leverage processes and diagnostic tools to troubleshoot, maintain and optimize solutions and respond to customer and production issues
- Continuously support technical debt reduction, process transformation, and overall optimization
- Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security
- Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
- Bachelor’s Degree (preferably in information technology, engineering, math, computer science, analytics, or another related field)
- 5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
- 5+ years of combined experience working with industry-standard relational, dimensional or non-relational data storage systems
- 5+ years of experience in designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
- 5+ years of experience in managing data assets using SQL, Python, Scala, VB.NET or other similar querying/coding languages
- 3+ years of experience working with healthcare data or data to support healthcare organizations
Preferred Qualifications
- 5+ years of experience in creating source-to-target mappings and ETL design for integration of new/modified data streams into the data warehouse/data marts
- Experience in Unix, PowerShell or other batch scripting languages
- Experience supporting data pipelines that power analytical content within common reporting and business intelligence platforms (e.g. Power BI, Qlik, Tableau, MicroStrategy)
- Experience supporting analytical capabilities inclusive of reporting, dashboards, extracts, BI tools, analytical web applications and other similar products
- Experience contributing to cross-functional efforts with proven success in creating healthcare insights
- Experience and credibility interacting with analytics and technology leadership teams
- Depth of experience and a proven track record creating and maintaining sophisticated data frameworks for healthcare organizations
- Exposure to Azure, AWS, or Google Cloud ecosystems
- Exposure to Amazon Redshift, Amazon S3, Hadoop HDFS, Azure Blob, or similar big data storage and management components
- Demonstrated desire to continuously learn and seek new options and approaches to business challenges
- Willingness to leverage best practices, share knowledge, and improve the collective work of the team
- Demonstrated ability to effectively communicate concepts verbally and in writing
- Demonstrated awareness of when to appropriately escalate issues/risks
- Demonstrated excellent communication skills, both written and verbal
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
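The data-audit responsibilities above ("detect data quality issues, identify root causes") often start with simple automated checks. A hedged sketch in PySpark; the dataset and rules are illustrative only:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_dq_audit").getOrCreate()
claims = spark.read.parquet("s3://warehouse/claims/")  # hypothetical dataset

checks = {
    # completeness: key identifiers must be present
    "null_member_id": claims.filter(F.col("member_id").isNull()).count(),
    # uniqueness: claim_id should never repeat
    "duplicate_claim_id": (
        claims.groupBy("claim_id").count().filter("count > 1").count()
    ),
    # validity: paid amounts should not be negative
    "negative_paid_amount": claims.filter(F.col("paid_amount") < 0).count(),
}

failures = {name: n for name, n in checks.items() if n > 0}
if failures:
    # In a real pipeline this would raise an alert or quarantine the batch
    print("Data quality failures:", failures)
```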
Posted 2 weeks ago
8.0 - 13.0 years
10 - 20 Lacs
Bengaluru
Remote
Lead AWS Glue Data Engineer
Job Location: Hyderabad / Bangalore / Chennai / Noida / Gurgaon / Pune / Indore / Mumbai / Kolkata
We are seeking a skilled Lead AWS Data Engineer with 8+ years of strong programming and SQL skills to join our team. The ideal candidate will have hands-on experience with AWS Data Analytics services and a basic understanding of general AWS services. Additionally, prior experience with Oracle and Postgres databases and secondary skills in Python and Azure DevOps will be an advantage.
Key Responsibilities:
- Design, develop, and optimize data pipelines using AWS Data Analytics services such as RDS, DMS, Glue, Lambda, Redshift, and Athena.
- Implement data migration and transformation processes using AWS DMS and Glue.
- Work with SQL (Oracle & Postgres) to query, manipulate, and analyze large datasets.
- Develop and maintain ETL/ELT workflows for data ingestion and transformation.
- Utilize AWS services like S3, IAM, CloudWatch, and VPC to ensure secure and efficient data operations.
- Write clean and efficient Python scripts for automation and data processing.
- Collaborate with DevOps teams using Azure DevOps for CI/CD pipelines and infrastructure management.
- Monitor and troubleshoot data workflows to ensure high availability and performance.
Preferred Qualifications:
- AWS certifications in Data Analytics, Solutions Architect, or DevOps.
- Experience with data warehousing concepts and data lake implementations.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
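For reference, a minimal AWS Glue PySpark job follows the skeleton sketched below; the catalog database, table, and output path are hypothetical:

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Example transform: drop rows with no order id, then write curated Parquet
filtered = dyf.filter(lambda row: row["order_id"] is not None)
glue_context.write_dynamic_frame.from_options(
    frame=filtered,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```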
Posted 2 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Key Responsibilities
- Administer and maintain AWS environments supporting data pipelines, including S3, EMR, Athena, Glue, Lambda, CloudFormation, and Redshift.
- Cost analysis: use AWS Cost Explorer to analyze services and usage, and create dashboards to alert on cost and usage outliers.
- Performance and audit: use AWS CloudTrail and CloudWatch to monitor system performance and usage.
- Monitor, troubleshoot, and optimize infrastructure performance and availability.
- Provision and manage cloud resources using Infrastructure as Code (IaC) tools (e.g. AWS CloudFormation, Terraform).
- Collaborate with data engineers working in PySpark, Hive, Kafka, and Python to ensure infrastructure alignment with processing needs.
- Support code integration with Git repositories.
- Implement and maintain security policies, IAM roles, and access controls.
- Participate in incident response and support resolution of operational issues, including on-call responsibilities.
- Manage backup, recovery, and disaster recovery processes for AWS-hosted data and services.
- Interface directly with client teams to gather requirements, provide updates, and resolve issues professionally.
- Create and maintain technical documentation and operational runbooks.
Required Qualifications
- 3+ years of hands-on administration experience managing AWS infrastructure, particularly in support of data-centric workloads.
- Strong knowledge of AWS services including but not limited to S3, EMR, Glue, Lambda, Redshift, and Athena.
- Experience with infrastructure automation and configuration management tools (e.g. CloudFormation, Terraform, AWS CLI).
- Proficiency in Linux administration and shell scripting, including installing and managing software on Linux servers.
- Familiarity with Kafka, Hive, and distributed processing frameworks such as Apache Spark.
- Ability to manage and troubleshoot IAM configurations, networking, and cloud security best practices.
- Demonstrated experience with monitoring tools (e.g. CloudWatch, Prometheus, Grafana) and alerting systems.
- Excellent verbal and written communication skills.
- Comfortable working with cross-functional teams and engaging directly with clients.
Preferred Qualifications
- AWS certification (e.g. Solutions Architect Associate, SysOps Administrator).
- Experience supporting data science or analytics teams.
- Familiarity with DevOps practices and CI/CD pipelines.
- Familiarity with Apache Iceberg-based data pipelines.
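As a sketch of the cost-analysis duty above, the boto3 call below pulls last week's unblended cost per service from AWS Cost Explorer; the alert threshold is an arbitrary placeholder:

```python
from datetime import date, timedelta
import boto3

ce = boto3.client("ce")
end = date.today()
start = end - timedelta(days=7)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

THRESHOLD = 100.0  # hypothetical daily spend limit per service, in USD
for day in resp["ResultsByTime"]:
    for group in day["Groups"]:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > THRESHOLD:
            # Flag the outlier; a real setup would publish this to a dashboard or alert
            print(f"{day['TimePeriod']['Start']} {group['Keys'][0]}: ${amount:,.2f}")
```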
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title:
Company Overview
Incedo is a US-based consulting, data science and technology services firm with over 3000 people helping clients from our six offices across the US, Mexico and India. We help our clients achieve competitive advantage through end-to-end digital transformation. Our uniqueness lies in bringing together strong engineering, data science, and design capabilities coupled with deep domain understanding. We combine services and products to maximize business impact for our clients in the telecom, banking, wealth management, product engineering and life science & healthcare industries.
Working at Incedo will provide you an opportunity to work with industry-leading client organizations, deep technology and domain experts, and global teams. Incedo University, our learning platform, provides ample learning opportunities starting with a structured onboarding program and carrying on throughout various stages of your career. A variety of fun activities is also an integral part of our friendly work environment. Our flexible career paths allow you to grow into a program manager, a technical architect or a domain expert based on your skills and interests.
Our mission is to enable our clients to maximize business impact from technology by:
- Harnessing the transformational impact of emerging technologies
- Bridging the gap between business and technology
Role Description
As an engineer on the Cloud Data Platform (AWS) at Incedo, you will be responsible for designing, deploying and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists and business analysts to understand business requirements and design scalable, reliable and cost-effective solutions that meet those requirements.
Roles & Responsibilities
- Designing, developing and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Collaborating with other teams to ensure the consistency and integrity of data
- Troubleshooting and resolving data platform issues
Technical Skills Requirements
- In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda
- Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies
- Familiarity with cloud-based infrastructure and deployment, specifically on AWS
- Strong knowledge of programming languages such as Python, Java, and SQL
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders in a clear and concise manner
- Understanding of, and alignment with, the company's long-term vision
- Openness to new ideas and willingness to learn and develop new skills
- Ability to work well under pressure and manage multiple tasks and priorities
Qualifications
- 5-8 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Company Values
We value diversity at Incedo. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
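A common wiring on such AWS data platforms is an S3-triggered Lambda that kicks off a Glue job for each new file. A minimal sketch; the job name and argument key are hypothetical:

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event notification: extract the bucket and object key
    record = event["Records"][0]["s3"]
    input_path = f"s3://{record['bucket']['name']}/{record['object']['key']}"

    # Start a (hypothetical) Glue job, passing the new file as a job argument
    resp = glue.start_job_run(
        JobName="ingest-landing-files",
        Arguments={"--input_path": input_path},
    )
    return {"job_run_id": resp["JobRunId"]}
```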
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About QualiZeal:
QualiZeal is North America's fastest-growing independent digital quality engineering services company, with a global headcount of 800+ software quality and development engineers. Trusted by 40+ global enterprises, QualiZeal has delivered over 200 successful projects in the areas of Quality Engineering, Digital Engineering, Advisory and Transformation, and Emerging Technology Testing, earning an industry-leading client NPS score of 85. Founded on principles of delivery excellence, service orientation and customer delight, we are a fast-paced, culture-driven organization on a high-growth trajectory.
Recognitions:
- Great Place to Work Certified (2023, 2024)
- Major Contender in Quality Engineering by Everest Group (2023)
- Economic Times Excellence Award (2023)
- The Global Choice Award (2022)
- NASSCOM Member
- ISO 13485:2016 and ISO 9001:2015
- Glassdoor Rating: 4.7
Position: Data Architect
Location: Hyderabad, India
Job Type: Full Time
Role & responsibilities:
- Design, develop, test, deploy and troubleshoot SQL scripts and stored procedures that implement complex ETL processes.
- Must have experience in query optimization; must be able to understand complex business logic and fine-tune long-running queries.
- Must be expert at reverse engineering and extracting business requirements and rules from code.
- Schedule and monitor ETL jobs using SSIS.
- Build and review functional and business specifications.
- Expert understanding of PostgreSQL stored procedures, views, and functions.
- Provide estimates for ETL work.
- Database/data warehouse performance tuning, monitoring and troubleshooting expertise.
- Expertise in query construction, including writing and optimizing complex SQL queries that contain subqueries, joins, and derived tables.
- Troubleshoot and fix code; unit test code and document the results.
- Must be expert at providing non-functional requirements.
- Help create test data and provide support to the QA process.
- Work with the gatekeeper on promoting ETL jobs from QA to production.
- Build, patch and data-fix in the production environment; ensure very high availability at a scale of 99.999%.
- Establish coding practices, best practices, and SOPs; participate in code reviews and enforce the relevant processes.
- Strong analytical and thinking capabilities, with good communication skills to conduct requirement-gathering sessions and interview customers.
- Ability to perform independent research to understand product requirements and customer needs.
- Communicate effectively with project teams and other stakeholders; translate technical details for a non-technical audience.
- Expert at creating architectural artifacts for the data warehouse.
- Team and effort management: set expectations for the client and the team, and ensure all deliverables are delivered on time at the highest quality.
Technical Skills:
- ETL: SSIS, AWS Glue
- SQL: stored procedures, functions, triggers, etc.
- Query optimization and server monitoring
- DBMS: AWS Aurora MySQL, AWS Redshift, PostgreSQL
- Cloud services: AWS, including EC2, RDS, S3, and IAM
- Data skills: SQL performance tuning
- Coding: knowledge of a programming language like C#, Python or Java, in order to oversee dev resources
- Team and people management; Agile Scrum practices
- Great learning attitude, eagerness to take ownership, a global mindset, and excellent communication skills
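As an example of the query-optimization work described above, the snippet below contrasts a row-by-row correlated subquery with a set-based join rewrite in PostgreSQL, run through psycopg2. The tables, columns, and connection details are invented for illustration:

```python
import psycopg2

# Slow form: the subquery re-executes for every row of orders
SLOW = """
UPDATE orders o
SET region = (SELECT c.region FROM customers c WHERE c.id = o.customer_id)
"""

# Set-based rewrite: one join instead of a per-row lookup
FAST = """
UPDATE orders o
SET region = c.region
FROM customers c
WHERE c.id = o.customer_id
"""

conn = psycopg2.connect(host="db.example.com", dbname="dw", user="etl", password="secret")
with conn, conn.cursor() as cur:
    cur.execute(FAST)  # prefer the join form; verify both with EXPLAIN ANALYZE
    print(cur.rowcount, "rows updated")
conn.close()
```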
Posted 2 weeks ago
20.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Corporate Overview
Powering Performance Marketplaces in Digital Media
QuinStreet is a pioneer in powering decentralized online marketplaces that match searchers and “research and compare” consumers with brands. We run these virtual- and private-label marketplaces in one of the nation’s largest media networks. Our industry leading segmentation and AI-driven matching technologies help consumers find better solutions and brands faster. They allow brands to target and reach in-market customer prospects with pinpoint segment-by-segment accuracy, and to pay only for performance results. Our campaign-results-driven matching decision engines and optimization algorithms are built from over 20 years and billions of dollars of online media experience.
We believe in:
- The direct measurability of digital media.
- Performance marketing. (We pioneered it.)
- The advantages of technology.
We bring all this together to deliver truly great results for consumers and brands in the world’s biggest channel.
Job Title: Production Support Engineer - RevOps Team
Experience: 2-4 Years
Location: Remote / On-site (depending on the role and region)
Job Description:
We are seeking a motivated and skilled Production Support Engineer to join our RevOps team. In this role, you will ensure the continuous uptime of our application systems by monitoring alerts, managing production incidents, and supporting product operations. You will collaborate with cross-functional teams to implement effective alerting strategies, manage post-mortems, and drive improvements to prevent future issues. As part of the RevOps team, you will be responsible for owning the post-mortem process for outages and ensuring thorough documentation is maintained. Additionally, you'll participate in on-call support for addressing issues related to applications or data pipelines.
Key Responsibilities:
- Alerting and Monitoring: Use alerting tools to monitor application performance. Build and configure alerts to proactively detect and address production outages.
- Cross-functional Collaboration: Work closely with the product, marketplace, and business teams (client and media) to ensure comprehensive alerting coverage and swift resolution of production outages.
- Incident Management: Respond to and resolve escalated production issues, ensuring minimal disruption to business operations and customer experience.
- Post-Mortem Ownership: Own the post-mortem process for production outages. Analyze incidents, document findings, and collaborate with relevant teams to implement corrective actions and avoid similar issues in the future.
- Documentation: Maintain thorough documentation of production incidents, actions taken, and lessons learned to improve incident response and application reliability.
Required Skills:
- SQL: Solid experience in SQL for querying databases, data manipulation, and troubleshooting.
- Redshift: Familiarity with Amazon Redshift for data warehousing and querying large datasets.
- MS SQL Server: Experience working with MS SQL Server for database management and querying.
- Python: Familiarity with Python scripting for automation and monitoring is a plus.
- Tableau: Proficiency in Tableau for data visualization and reporting.
- Excel: Strong skills in Excel, including advanced formulas and data analysis techniques.
Desired Skills:
- Alerting Tools: Experience working with alerting tools like Datadog to set up and monitor application and pipeline alerts.
- Familiarity with data engineering concepts and pipeline management.
- Experience with cloud-based environments (AWS, GCP, Azure).
Additional Information:
- On-call support: This role requires being on-call in rotational shifts for a minimum of 2 days per week, including weekends and holidays, to address any issues related to applications or data pipelines. The role can be performed remotely as long as you have internet access.
- Vacation: Vacation plans must be coordinated with a substitute, as required.
If you're looking for an exciting opportunity to support cutting-edge applications in a dynamic and collaborative team, we would love to hear from you!
QuinStreet is an equal opportunity employer. We do not discriminate on the basis of race, color, religion, national origin, pregnancy status, sex, age, marital status, disability, sexual orientation, gender identity or any other characteristics protected by law. Please see QuinStreet’s Employee Privacy Notice here.
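A typical pipeline alert in this kind of role is a data-freshness check against Redshift. A hedged sketch using psycopg2 (Redshift is PostgreSQL-wire-compatible); the cluster endpoint, table, column, and threshold are all assumptions:

```python
import psycopg2

MAX_LAG_MINUTES = 120  # hypothetical freshness SLA

conn = psycopg2.connect(
    host="revops-cluster.example.redshift.amazonaws.com",  # placeholder endpoint
    port=5439, dbname="revops", user="monitor", password="secret",
)
with conn.cursor() as cur:
    # Let the warehouse compute the lag so client/server clock skew doesn't matter
    cur.execute("SELECT DATEDIFF(minute, MAX(loaded_at), GETDATE()) FROM fact_leads")
    (lag_minutes,) = cur.fetchone()

if lag_minutes is None or lag_minutes > MAX_LAG_MINUTES:
    # In production this would page via Datadog or PagerDuty instead of printing
    print(f"ALERT: fact_leads has not loaded for {lag_minutes} minutes")
conn.close()
```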
Posted 2 weeks ago
125.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.
The Position
In Roche Informatics, we build on Roche’s 125-year history as one of the world’s largest biotech companies, globally recognized for providing transformative innovative solutions across major disease areas. We combine human capabilities with cutting-edge technological innovations to do now what our patients need next. Our commitment to our patients’ needs motivates us to deliver technology that evolves the practice of medicine. Be part of our inclusive team at Roche Informatics, where we’re driven by a shared passion for technological novelties and optimal IT solutions.
Position Overview
We are seeking an experienced ETL Architect to design, develop, and optimize data extraction, transformation, and loading (ETL) solutions, and to work closely with multi-disciplinary and multi-cultural teams to build structured, high-quality data solutions. The person may lead technical squads. These solutions will be leveraged across Enterprise, Pharma and Diagnostics solutions to help our teams fulfill our mission: to do now what patients need next. This role requires deep expertise in Python, AWS Cloud, and ETL tools to build and maintain scalable data pipelines and architectures. The ETL Architect will work closely with cross-functional teams to ensure efficient data integration, storage, and accessibility for business intelligence and analytics.
Key Responsibilities
- ETL Design & Development: Architect and implement high-performance ETL pipelines using AWS cloud services, Snowflake, and ETL tools such as Talend, dbt, Informatica, ADF, etc.
- Data Architecture: Design and implement scalable, efficient, and cloud-native data architectures.
- Data Integration & Flow: Ensure seamless data integration across multiple source systems, leveraging AWS Glue, Snowflake, and other ETL tools.
- Performance Optimization: Monitor and tune ETL processes for performance, scalability, and cost-effectiveness.
- Governance & Security: Establish and enforce data quality, governance, and security standards for ETL processes.
- Collaboration: Work with data engineers, analysts, and business stakeholders to define data requirements and ensure effective solutions.
- Documentation & Best Practices: Maintain comprehensive documentation and promote best practices for ETL development and data transformation.
- Troubleshooting & Support: Diagnose and resolve performance issues, failures, and bottlenecks in ETL processes.
Required Qualifications
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Experience: 6+ years of experience in ETL development, with 3+ years in an ETL architecture role.
- Expertise in Snowflake or any MPP data warehouse (including Snowflake data modeling, optimization, and security best practices).
- Strong experience with AWS cloud services, especially AWS Glue, AWS Lambda, S3, Redshift, and IAM, or with Azure/GCP cloud services.
- Proficiency in ETL tools such as Informatica, Talend, Apache NiFi, SSIS, or DataStage.
- Strong SQL skills and experience with relational and NoSQL databases.
- Experience in API integrations.
- Proficiency in scripting languages (Python, Shell, PowerShell) for automation.
- Prior experience in the pharmaceutical, diagnostics or healthcare domain is a plus.
Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and documentation skills.
- Ability to work collaboratively in a fast-paced, cloud-first environment.
Preferred Qualifications
- Certifications in AWS, Snowflake, or ETL tools.
- Experience in real-time data streaming, microservices-based architectures, and DevOps for data pipelines.
- Knowledge of data governance, compliance (GDPR, HIPAA), and security best practices.
Who we are
A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let’s build a healthier future, together.
Roche is an Equal Opportunity Employer.
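For flavor, here is a minimal load step of the kind this role architects: a Snowflake COPY INTO from an external S3 stage, issued through the Snowflake Python connector. The account locator, warehouse, stage, and table names are hypothetical:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-central-1",  # hypothetical account locator
    user="ETL_SVC",
    password="secret",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Bulk-load Parquet files from a pre-created external stage into a staging table
    cur.execute("""
        COPY INTO staging.orders
        FROM @s3_landing_stage/orders/
        FILE_FORMAT = (TYPE = 'PARQUET')
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load status returned by COPY
finally:
    conn.close()
```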
Posted 2 weeks ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a wide range of opportunities across industries throughout the country.
The average salary range for Redshift professionals in India varies with experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
- SQL
- ETL tools
- Data modeling
- Cloud computing (AWS)
- Python/R programming
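For candidates building these skills, the snippet below is the minimal "hello world" of querying Redshift from Python using Amazon's redshift_connector driver; the cluster endpoint, credentials, and table are placeholders:

```python
import redshift_connector

conn = redshift_connector.connect(
    host="examplecluster.abc123xyz.ap-south-1.redshift.amazonaws.com",  # placeholder
    database="dev",
    user="awsuser",
    password="secret",
)
cur = conn.cursor()
cur.execute("SELECT current_date, COUNT(*) FROM sales")  # hypothetical table
print(cur.fetchone())
conn.close()
```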
As demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!