3.0 years
0 Lacs
Greater Kolkata Area
On-site
Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.

Mactores is seeking an AWS Data Engineer (Senior) to join our team. The ideal candidate will have extensive experience in PySpark and SQL and will have built data pipelines using Amazon EMR or AWS Glue. The candidate must also have experience in data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto, plus orchestration experience using Airflow.

What you will do?
- Develop and maintain data pipelines using Amazon EMR or AWS Glue.
- Create data models and support end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
- Build and maintain the orchestration of data pipelines using Airflow.
- Collaborate with other teams to understand their data needs and help design solutions.
- Troubleshoot and optimize data pipelines and data models.
- Write and maintain PySpark and SQL scripts to extract, transform, and load data.
- Document and communicate technical solutions to both technical and non-technical audiences.
- Stay up to date with new AWS data technologies and evaluate their impact on our existing systems.

What are we looking for?
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience working with PySpark and SQL.
- 2+ years of experience building and maintaining data pipelines using Amazon EMR or AWS Glue.
- 2+ years of experience with data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
- 1+ years of experience building and maintaining the orchestration of data pipelines using Airflow.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills.
- Ability to work independently and within a team environment.

You are preferred if you have
- AWS Data Analytics Specialty Certification
- Experience with Agile development methodology

Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles, which honor decision-making, leadership, collaboration, and curiosity, drive how we work:
1. Be one step ahead
2. Deliver the best
3. Be bold
4. Pay attention to the detail
5. Enjoy the challenge
6. Be curious and take action
7. Take leadership
8. Own it
9. Deliver value
10. Be collaborative

You can read more about our work culture at https://mactores.com/careers

The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
1. Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
2. Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
3. HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
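To make the EMR/Glue pipeline requirement above concrete, here is a minimal PySpark ETL sketch of the kind this role describes: read raw files from S3, clean and type them, and write partitioned Parquet back for Athena or Redshift Spectrum to query. The bucket paths and column names are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in S3 (hypothetical bucket/path)
orders = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Transform: type the amount column and keep completed orders only
cleaned = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETED")
)

# Load: write partitioned Parquet for downstream querying
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders/"
)
```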
Posted 1 week ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Overview
As a Product Data Analyst at SMS Magic, you will play a crucial role in driving product insights through the extraction, manipulation, and analysis of large datasets. You will develop and maintain interactive dashboards, analyze data from Google Analytics 4, and collaborate closely with Product Managers and various stakeholders. Your ability to manage multiple tasks, deliver actionable insights, and support data-driven decision-making processes will be key to your success in this role.

Key Responsibilities
- Data Extraction and Analysis: Utilize SQL and databases such as Redshift to extract, manipulate, and analyze large datasets to drive product insights.
- Google Analytics: Analyze and interpret data from Google Analytics 4 to understand user behavior and product performance and to identify opportunities for improvement.
- Collaboration: Work closely with Product Managers to define product metrics, set goals, and track performance.
- Multitasking: Manage multiple tasks simultaneously, ensuring timely delivery of insights and analyses.
- Stakeholder Engagement: Collaborate with various teams including engineering, marketing, and sales to gather requirements and deliver actionable insights.
- Presentations: Present findings and recommendations to stakeholders, supporting data-driven decision-making processes.

Qualifications
- Experience: Minimum of 1 year of experience working in a product company.
- Technical Skills: Proficiency in SQL and experience with databases like Redshift; hands-on experience with dashboarding tools such as Tableau, Looker, or Power BI.
- Analytics: Experience working with Google Analytics 4 data.
- Additional Skills: Basic knowledge of Python is a plus, but not mandatory.
- Soft Skills: Excellent multitasking skills and the ability to manage multiple priorities in a fast-paced environment; strong collaboration skills with a proven track record of working effectively with Product Managers; exceptional stakeholder management skills, with the ability to communicate complex data insights clearly and concisely.

What does working at SMS Magic offer?
At SMS Magic, people's growth parallels the company's growth, and our work culture supports our commitment to creating a world-class CRM messaging company. Our work culture is built on high-performance teaming where everyone can achieve their potential and contribute to building a better working world for our people and our clients. We offer a sense of balance: we want our people to be active, healthy, and happy, not just in their jobs but in their lives outside of work. Our competitive compensation package rewards you based on your performance and recognizes the value you bring to our business. In addition, we do our best to make your time with us a rewarding learning experience that helps you grow as an individual.

Plus, we offer
- The freedom and flexibility to handle your role in a way that's right for you.
- Exposure to a dynamic and growing global business environment.
- Exposure to innovative and cutting-edge technology and tools.
- Scope to showcase your analytical capabilities and make high-impact contributions to business teams.
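As a rough illustration of the SQL-plus-Redshift workflow above, the sketch below pulls a product metric from Redshift into pandas over the PostgreSQL wire protocol (which Redshift supports). The cluster endpoint, table, and columns are hypothetical.

```python
import pandas as pd
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Connection details are placeholders
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="analyst", password="***",
)

# Hypothetical events table: weekly active users per product feature
query = """
    SELECT date_trunc('week', event_time) AS week,
           feature,
           COUNT(DISTINCT user_id)        AS weekly_active_users
    FROM product_events
    GROUP BY 1, 2
    ORDER BY 1, 2;
"""
df = pd.read_sql(query, conn)
conn.close()
```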
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Sadar, Uttar Pradesh, India
On-site
Python Developer
Location: Pune / Gurgaon (hybrid)
Experience: 5-8 years
Notice Period: Immediate to 15 days
Mandatory skills: Python, Java and JavaScript; AWS experience with the AWS Python library (boto3) for services such as EC2, S3, Lambda, DynamoDB, SQS, SNS; SQL

Below are the key skills and qualifications we are looking for:
- Over 4 years of software development experience, with expertise in Python and familiarity with other programming languages such as Java and JavaScript.
- A minimum of 2 years of significant hands-on experience with AWS services, including Lambda and Step Functions.
- Domain knowledge in invoicing or billing is preferred, with experience on the Zuora platform (Billing and Revenue) being highly desirable.
- At least 2 years of working knowledge of SQL.
- Solid experience working with AWS cloud services, especially S3, Glue, Lambda, Redshift, and Athena.
- Experience with continuous integration/delivery (CI/CD) tools like Jenkins and Terraform.
- Excellent communication skills are essential.

Responsibilities:
- Design and implement backend services and APIs using Python.
- Build and maintain CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, or Jenkins.
- Optimize performance, scalability, and security of cloud applications.
- Implement logging, monitoring, and alerting for production workloads.
- Design, develop, and maintain scalable backend services using Python and Java.
- Develop responsive and user-friendly frontend interfaces using JavaScript.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and well-tested code.
- Troubleshoot, debug, and optimize application performance.
- Participate in code reviews and follow best development practices.
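For a flavor of the Lambda/Step Functions work mentioned above, here is a small boto3 sketch that starts a Step Functions execution for a billing workflow. The state machine ARN and payload are placeholders.

```python
import json
import boto3

# Region and ARN are assumptions for illustration
sfn = boto3.client("stepfunctions", region_name="ap-south-1")

# Kick off an invoicing workflow run with a JSON payload
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:invoice-billing",
    input=json.dumps({"invoice_id": "INV-1001", "action": "generate"}),
)
print(response["executionArn"])
```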
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures using AWS services.
- Implement ETL/ELT processes using AWS Glue, Lambda, and Step Functions.
- Work with structured and unstructured data across S3, Redshift, and other AWS data services.
- Develop data integration workflows to collect, process, and store data efficiently.
- Optimize performance and cost of data pipelines.
- Monitor and troubleshoot data pipeline failures using CloudWatch and related tools.
- Collaborate with data analysts, data scientists, and other stakeholders to ensure data availability and quality.
- Apply best practices for security and governance of data assets on AWS.

Skills
- 3+ years of experience in Python, SQL, and PySpark.
- 2+ years of experience with AWS services such as AWS Glue, AWS Lambda, Amazon S3, Amazon EC2, Amazon Redshift, and CloudWatch.
- Experience in building and maintaining ETL pipelines.
- Knowledge of data lake and data warehouse architecture.
- Familiarity with DevOps tools and CI/CD pipelines is a plus.
- Good understanding of data governance and security best practices on AWS.

Preferred Qualifications
- AWS Certified Data Analytics Specialty or AWS Certified Solutions Architect.
- Experience with other cloud platforms (Azure, GCP) is a plus.
- Exposure to tools like Apache Airflow, Kafka, or Snowflake is an added advantage.
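A skeletal AWS Glue job of the kind this role involves might look like the following: read a cataloged S3 dataset as a DynamicFrame, drop bad records, and write curated Parquet. The database, table, and bucket names are assumptions for illustration; the awsglue modules are available inside the Glue job runtime.

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue = GlueContext(sc)
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Hypothetical catalog table produced by a Glue crawler over S3 data
dyf = glue.create_dynamic_frame.from_catalog(database="raw", table_name="clickstream")

# Drop malformed records, then write to the curated zone as Parquet
clean = dyf.drop_fields(["_corrupt_record"])
glue.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/clickstream/"},
    format="parquet",
)
job.commit()
```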
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities
- Design and develop robust ETL/ELT pipelines using Python and AWS Glue / Lambda.
- Work with AWS services such as S3, Athena, Redshift, Glue, Step Functions, and CloudWatch.
- Build and maintain data integration processes between internal and external data sources.
- Optimize data pipelines for performance, scalability, and reliability.
- Implement data quality checks and monitoring.
- Collaborate with data analysts, engineers, and product teams to meet data requirements.
- Maintain proper documentation and ensure best practices in data engineering.
- Work with structured and semi-structured data formats (JSON, Parquet, etc.).

Skills
- 4-6 years of experience as a Data Engineer.
- Strong programming skills in Python (Pandas, Boto3, PySpark).
- Proficient in SQL and performance tuning.
- Hands-on experience with AWS services: S3, Glue, Lambda, Athena, Redshift, Step Functions, CloudWatch.
- Experience working with Databricks or EMR is a plus.
- Experience in data lake and data warehouse concepts.
- Familiar with version control systems like Git.
- Knowledge of CI/CD pipelines and workflow tools (Airflow is a plus).
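To ground the Pandas/Boto3 requirement above, here is a small sketch that pulls a JSON extract from S3, runs basic data-quality checks, and writes Parquet back — the "quality checks and monitoring" responsibility in miniature. Bucket names, keys, and columns are hypothetical; the Parquet output assumes pyarrow is installed.

```python
from io import BytesIO

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Pull a newline-delimited JSON extract from a hypothetical raw bucket
obj = s3.get_object(Bucket="raw-bucket", Key="orders/2024/06/orders.json")
df = pd.read_json(BytesIO(obj["Body"].read()), lines=True)

# Minimal data-quality gate before anything lands downstream
assert df["order_id"].notna().all(), "null order_id found"
assert df["order_id"].is_unique, "duplicate order_id found"

# Persist as Parquet for Athena/Redshift Spectrum (requires pyarrow)
buf = BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(Bucket="curated-bucket", Key="orders/orders.parquet", Body=buf.getvalue())
```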
Posted 1 week ago
5.0 - 10.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Urgent hiring: Azure Data Engineer with a leading management consulting company, Bangalore location.
- Strong expertise in Databricks and PySpark for both batch processing and live (streaming) data sources.
- 4+ relevant years of experience in Databricks and PySpark/Scala; 7+ total years of experience.
- Strong in data modelling and design.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills; willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives like CoE, CoP.

CTC: hike considered on current/last drawn pay.
Apply: rohita.robert@adecco.com

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill
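Since the role stresses streaming with Databricks and PySpark, here is a minimal Structured Streaming sketch: consume a Kafka topic and maintain per-minute page counts. The broker address, topic, and JSON payload shape are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Hypothetical Kafka endpoint read as a streaming source
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Parse the value payload and count events per minute per page
counts = (
    events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
    .withColumn("page", F.get_json_object("body", "$.page"))
    .groupBy(F.window("timestamp", "1 minute"), "page")
    .count()
)

# Memory sink keeps the sketch self-contained; production would target Delta/S3
query = (
    counts.writeStream.outputMode("complete")
    .format("memory").queryName("page_counts")
    .start()
)
```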
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 9 to 11 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise around data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Experience of using a job scheduler, e.g., Autosys. Exposure to Business Intelligence tools, e.g., Tableau, Power BI

Certification on any one or more of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Now Hiring: Senior Data Engineer (Database Specialist)
Location: Remote, India (candidates based in India only)
Job Type: Full-Time

Role Description
This is a full-time remote role for a Data Engineer (Database Specialist). The Data Engineer will be responsible for designing and implementing data models, developing and maintaining data pipelines, and extracting, transforming, and loading (ETL) data. In addition, the role includes designing and managing data warehouses and performing data analytics to support business decisions.

Key Responsibilities
- Design, optimize, and manage databases that support large-scale data
- Develop reliable ETL/ELT pipelines for transforming structured and unstructured device data
- Ensure data consistency, traceability, and performance across systems
- Work with time-series and event-based data
- Collaborate with QA, engineering, and product teams to integrate analytics into the product
- Implement data quality checks, audit trails, and compliance safeguards
- Support cloud data infrastructure in AWS, with tools like Redshift, RDS, S3

Must-Have Skills
- 5+ years in data engineering or backend-focused development
- Strong command of SQL and schema design (PostgreSQL, MySQL)
- Hands-on with NoSQL systems (MongoDB, DynamoDB)
- Python-based ETL scripting (Pandas, PySpark preferred)
- Experience with cloud-based data platforms (AWS)
- Familiarity with SaaS application data workflows
- Understanding of security, privacy, and data compliance practices

Nice to Have
- Certifications in AWS Data Engineering

Apply now or refer someone in your network! Send your resume to Rikhi@Sachhsoft.com
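A tiny sketch of the time-series ETL this role describes, assuming a pandas-to-PostgreSQL flow with SQLAlchemy: resample raw device readings to five-minute averages and land them in a staging table. File names, columns, and the connection string are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; credentials would come from a secrets store
engine = create_engine("postgresql+psycopg2://etl_user:***@db.example.com:5432/devicedata")

# Load raw device readings and resample the time series to 5-minute averages
raw = pd.read_csv("device_readings.csv", parse_dates=["reading_time"])
resampled = (
    raw.set_index("reading_time")
       .groupby("device_id")["temperature"]
       .resample("5min").mean()
       .reset_index()
)

# Land in a staging table; a downstream merge would make this idempotent
resampled.to_sql("stg_device_readings", engine, if_exists="replace", index=False)
```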
Posted 1 week ago
13.0 years
0 Lacs
Andhra Pradesh, India
On-site
Summary about Organization
A career in our Advisory Acceleration Center is the natural extension of PwC's leading global delivery capabilities. The team consists of highly skilled resources who assist clients in transforming their business by adopting technology, using bespoke strategy, operating models, processes and planning. You'll be at the forefront of helping organizations around the globe adopt innovative technology solutions that optimize business processes or enable scalable technology. Our team helps organizations transform their IT infrastructure and modernize applications and data management to help shape the future of business. An essential and strategic part of Advisory's multi-sourced, multi-geography Global Delivery Model, the Acceleration Centers are a dynamic, rapidly growing component of our business. The teams out of these Centers have achieved remarkable results in process quality and delivery capability, resulting in a loyal customer base and a reputation for excellence.

Job Description
Senior Data Architect with experience in the design, build, and optimization of complex data landscapes and legacy modernization projects. The ideal candidate will have deep expertise in database management, data modeling, cloud data solutions, and ETL (Extract, Transform, Load) processes. This role requires a strong leader capable of guiding data teams and driving the design and implementation of scalable data architectures. Key areas of expertise include:
- Design and implement scalable and efficient data architectures to support business needs.
- Develop data models (conceptual, logical, and physical) that align with organizational goals.
- Lead database design and optimization efforts for structured and unstructured data.
- Establish ETL pipelines and data integration strategies for seamless data flow.
- Define data governance policies, including data quality, security, privacy, and compliance.
- Work closely with engineering, analytics, and business teams to understand requirements and deliver data solutions.
- Oversee cloud-based data solutions (AWS, Azure, GCP) and modern data warehouses (Snowflake, BigQuery, Redshift).
- Ensure high availability, disaster recovery, and backup strategies for critical databases.
- Evaluate and implement emerging data technologies, tools, and frameworks to improve efficiency.
- Conduct data audits, performance tuning, and troubleshooting to maintain optimal performance.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 13+ years of experience in data modeling, including conceptual, logical, and physical data design.
- 5-8 years of experience in cloud data lake platforms such as AWS Lake Formation, Delta Lake, Snowflake or Google BigQuery.
- Proven experience with NoSQL databases and data modeling techniques for non-relational data.
- Experience with data warehousing concepts, ETL/ELT processes, and big data frameworks (e.g., Hadoop, Spark).
- Hands-on experience delivering complex, multi-module projects in diverse technology ecosystems.
- Strong understanding of data governance, data security, and compliance best practices.
- Proficiency with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
- Excellent leadership and communication skills, with a proven ability to manage teams and collaborate with stakeholders.

Preferred Skills
- Experience with modern data architectures, such as data fabric or data mesh.
- Knowledge of graph databases and modeling for technologies like Neo4j.
- Proficiency with programming languages like Python, Scala, or Java.
- Understanding of CI/CD pipelines and DevOps practices in data engineering.
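To make the dimensional-modeling thread above concrete, here is a minimal star-schema sketch executed from Python against a hypothetical PostgreSQL warehouse: one customer dimension and one order fact keyed to it. Table and column names are illustrative, not from the posting.

```python
import psycopg2

# Hypothetical warehouse connection; credentials are placeholders
conn = psycopg2.connect(host="warehouse.example.com", port=5432,
                        dbname="dw", user="modeler", password="***")

DDL = """
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key   BIGSERIAL PRIMARY KEY,   -- surrogate key
    customer_id    TEXT NOT NULL,           -- natural key from the source system
    segment        TEXT,
    effective_from DATE NOT NULL            -- SCD2 validity start
);

CREATE TABLE IF NOT EXISTS fact_orders (
    order_key    BIGSERIAL PRIMARY KEY,
    customer_key BIGINT NOT NULL REFERENCES dim_customer (customer_key),
    order_date   DATE NOT NULL,
    amount       NUMERIC(12, 2)
);
"""

with conn, conn.cursor() as cur:
    cur.execute(DDL)
conn.close()
```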
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision
- Identify and help address potential risks in the data supply chain
- Follow and contribute to technical standards
- Design and develop analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 5 to 8 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Hands-on Mantas (Oracle FCCM) expertise throughout the full development life cycle, including requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and ongoing technical support
- Ability to translate business needs (BRD) into effective technical solutions and documents (FRD/TSD)
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager and scenario development; thorough knowledge and hands-on experience of Mantas FSDM, DIS, and Batch Scenario Manager
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Basics of a job scheduler like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
0 years
0 Lacs
India
Remote
About the job
Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction. Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters.

Job Title: Data Engineer with Cortex (or similar AI experience)
Location: India / Remote

Key Responsibilities:
- Design and implement ETL/ELT workflows using Snowflake and Python.
- Build scalable and secure data pipelines using AWS services such as S3, Lambda, Glue, Redshift, and EMR.
- Develop, manage, and optimize data models and warehouse architecture in Snowflake.
- Leverage Cortex for deploying and managing machine learning workflows or real-time data processing, depending on the use case.
- Utilize Docker and container orchestration tools (e.g., Kubernetes) for deploying data applications.
- Collaborate with cross-functional teams to understand data requirements and ensure high data quality and availability.
- Monitor and troubleshoot performance issues across data infrastructure.

Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
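A rough sketch of the Snowflake-plus-Python ETL pattern named above, using the snowflake-connector-python package: COPY staged files in, then MERGE into the target table. The account, stage, and table names are placeholders.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager
conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Bulk-load staged CSV files into a raw table
cur.execute("""
    COPY INTO staging.orders_raw
    FROM @landing_stage/orders/
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")

# Upsert from raw into the modeled target table
cur.execute("""
    MERGE INTO analytics.orders AS t
    USING staging.orders_raw AS s ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount
    WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
""")
conn.close()
```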
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are looking for an experienced Data Engineer with experience building large-scale data pipelines and data lake ecosystems. Our daily work revolves around solving interesting and exciting problems against high engineering standards. Although you will be part of the backend team, you will work with cross-functional teams across the organization. This role demands good hands-on experience with different programming languages, especially Python, and knowledge of technologies like Kafka, AWS Glue, CloudFormation, and ECS. You will spend most of your time facilitating seamless streaming, tracking, and collaboration of huge data sets. This is a backend role, but not limited to it: you will work closely with producers and consumers of the data and build optimal solutions for the organization. We will appreciate a person with lots of patience and data understanding. Also, we believe in extreme ownership!

Responsibilities
- Design and build systems to efficiently move data across multiple systems and make it available for teams such as Data Science, Data Analytics, and Product.
- Design, construct, test, and maintain data management systems.
- Understand the data and business metrics required by the product and architect systems to make that data available in a usable/queryable manner.
- Ensure that all systems meet the business and company requirements as well as industry best practices.
- Keep abreast of new technologies in our domain.
- Recommend ways to constantly improve data reliability and quality.

Requirements
- Bachelor's/Master's degree, preferably in Computer Science or a related technical field.
- 2-5 years of relevant experience.
- Deep knowledge and working experience of the Kafka ecosystem.
- Good programming experience, preferably in Python, Java, or Go, and a willingness to learn more.
- Experience working with large data platforms.
- Strong knowledge of microservices, data warehouse, and data lake systems in the cloud, especially AWS Redshift, S3, and Glue.
- Strong hands-on experience in writing complex and efficient ETL jobs.
- Experience with version management systems (preferably Git).
- Strong analytical thinking and communication skills.
- Passion for finding and sharing best practices and driving discipline for superior data quality and integrity.
- Intellectual curiosity to find new and unusual ways to solve data management issues.
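To sketch the Kafka-to-data-lake movement this role centers on, here is a minimal consumer (using the kafka-python package) that micro-batches messages into S3. The topic, broker, and bucket names are assumptions.

```python
import json

import boto3
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "order-events",                       # hypothetical topic
    bootstrap_servers=["broker:9092"],
    group_id="s3-sink",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:                 # micro-batch before writing to the lake
        key = f"events/offset={message.offset}.json"
        s3.put_object(Bucket="data-lake-raw", Key=key,
                      Body="\n".join(json.dumps(r) for r in batch).encode())
        batch.clear()
```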
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: BI Developer and Data Analyst
Skills: Power BI, Databricks, SQL, Python, ETL, Redshift or Athena, AWS services (beyond QuickSight)
Experience: 4+ years

Responsibilities
- Design, develop, and maintain interactive and insightful dashboards using QuickSight.
- Conduct advanced data analysis to identify trends, patterns, and anomalies, providing meaningful interpretations and recommendations.
- Collaborate with stakeholders across different departments to understand their data needs and translate them into effective analytical solutions.
- Write and optimize SQL queries to extract, transform, and load data from various data sources.
- Utilize Python for data manipulation, automation of tasks, and statistical analysis.
- Ensure data accuracy, integrity, and consistency across all dashboards and analyses.
- Document dashboard specifications, data sources, and analytical methodologies.
- Stay up to date with the latest trends and best practices in data visualization and analytics.

Qualifications
- Bachelor's degree in a quantitative field such as Data Science, Statistics, Mathematics, Computer Science, or a related discipline.

Required Skills
- Data visualization best practices: proven experience in developing advanced dashboards and performing data analysis; ability to create clear, intuitive, and impactful visualizations (charts, graphs, tables, KPIs) that effectively communicate insights.
- Extensive experience with AWS QuickSight (or a similar BI tool): hands-on experience building, publishing, and maintaining interactive dashboards and reports.
- QuickSight data sources: experience connecting QuickSight to various data sources, especially those common in AWS environments (e.g., S3, Redshift, Athena, RDS, Glue).
- QuickSight dataset creation and management: proficiency in creating, transforming, and optimizing datasets within QuickSight, including calculated fields, parameters, and filters.
- Performance optimization: knowledge of how to optimize QuickSight dashboards and data for speed and scalability.

Preferred Skills
- Experience with other data visualization tools.
- Familiarity with machine learning concepts.
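One recurring task in a role like this is refreshing a dashboard dataset from Athena; the boto3 sketch below runs a query and polls for completion. The database, table, and results bucket are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database/table; results land in an S3 output location
qid = athena.start_query_execution(
    QueryString="SELECT region, SUM(revenue) AS revenue FROM sales.daily GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch rows for the dashboard dataset
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```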
Posted 1 week ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI's Data Visualization Engineers like you for a unique job opportunity to work with industry leaders.

What's in it for you?
- Pay above market standards.
- The role is contract-based, with project timelines of 2-6 months, or freelancing.
- Be part of an elite community of professionals who can solve complex AI challenges.
- Work location options: remote; onsite at a client location (US, UAE, UK, India, etc.); or Deccan AI's offices in Hyderabad or Bangalore.

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users.
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python.
- Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports.
- Mentor junior analysts and establish best practices for data visualization.

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker).
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake.
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka).
- Exposure to machine learning and AI-powered analytics.

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly).
- Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community?
We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify in all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching: be patient while we align your skills and preferences with an available project.
5. Project allocation: you'll be deployed on your preferred project!

Skip the noise. Focus on opportunities built for you!
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are the only professional services organization with a separate business dedicated exclusively to the financial services marketplace. Join the Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including asset management, banking and capital markets, insurance and private equity, health, government, and power and utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning and individually tailored coaching, you will experience ongoing professional development. That's how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

We're seeking a versatile Full Stack Architect with hands-on experience in Python (including multithreading and popular libraries), GenAI, and AWS cloud services. The ideal candidate should be proficient in backend development using NodeJS, ExpressJS, Python Flask/FastAPI, and RESTful API design, with strong frontend skills in Angular, ReactJS, and TypeScript. EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge. The Digital Engineering (DE) practice works with clients to analyse, formulate, design, mobilize and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surrounding business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY's DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations.

Your Key Responsibilities
- Application Development: Design and develop cloud-native applications and services using AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, Glue, Redshift, and EMR.
- Deployment and Automation: Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates.
- Architecture Design: Collaborate with architects and other engineers to design scalable and secure application architectures on AWS.
- Performance Tuning: Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency.
- Security: Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices.
- Container Services Management: Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate. Configure and manage container orchestration, scaling, and deployment strategies. Optimize container performance and resource utilization by tuning settings and configurations.
- Application Observability: Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana). Develop and configure monitoring, logging, and alerting systems to provide insights into application performance and health. Create dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting.
- Integration: Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow.
- Troubleshooting: Diagnose and resolve issues related to application performance, availability, and reliability.
- Documentation: Create and maintain comprehensive documentation for application design, deployment processes, and configuration.

Skills and Attributes for Success
Required Skills:
- AWS Services: Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, Glue, Redshift, and EMR.
- Backend: Python (multithreading, Flask, FastAPI), NodeJS, ExpressJS, REST APIs.
- Frontend: Angular, ReactJS, TypeScript.
- Cloud Engineering: Development with AWS (Lambda, EC2, S3, API Gateway, DynamoDB), Docker, Git, etc.
- Proven experience in developing and deploying AI solutions with Python and JavaScript.
- Strong background in machine learning, deep learning, and data modelling.
- Good to have: CI/CD pipelines, full-stack architecture, unit testing, API integration.
- Security: Understanding of AWS security best practices, including IAM, KMS, and encryption.
- Observability Tools: Proficiency with observability tools like AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack.
- Container Orchestration: Knowledge of container orchestration concepts and tools, including Kubernetes and Docker Swarm.
- Monitoring: Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or the ELK Stack.
- Collaboration: Strong teamwork and communication skills with the ability to work effectively with cross-functional teams.

Preferred Qualifications:
- Certifications: AWS Certified Solutions Architect (Associate or Professional), AWS Certified Developer - Associate, or similar certifications.
- Experience: At least 8 years of experience in an application engineering role with a focus on AWS technologies.
- Agile Methodologies: Familiarity with Agile development practices and methodologies.
- Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve complex issues.

Education:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent practical experience.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations - Argentina, China, India, the Philippines, Poland and the UK - and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
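For the backend side of the stack described above, here is a minimal FastAPI sketch with a Pydantic model; an in-memory dict stands in for DynamoDB or RDS. Route and model names are illustrative only.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-api")

class Order(BaseModel):
    order_id: str
    amount: float

# In-memory store standing in for DynamoDB/RDS in this sketch
ORDERS: dict[str, Order] = {}

@app.post("/orders")
def create_order(order: Order) -> dict:
    ORDERS[order.order_id] = order
    return {"status": "created", "order_id": order.order_id}

@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally with: uvicorn main:app --reload
```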
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
- Build data pipelines to ingest, process, and transform data from files, streams and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Preferred Education
Master's Degree

Required Technical and Professional Expertise
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS; experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical and Professional Experience
- Certification in AWS and Databricks, or Cloudera Spark Certified Developer.
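As an example of driving Spark work on AWS EMR from Python, the boto3 sketch below submits a spark-submit step to a running cluster. The cluster ID and script location are placeholders.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Hypothetical cluster ID; submit a PySpark script stored on S3 as an EMR step
response = emr.add_job_flow_steps(
    JobFlowId="j-ABCDEFGHIJKL",
    Steps=[{
        "Name": "nightly-ingest",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://code-bucket/jobs/ingest.py"],
        },
    }],
)
print(response["StepIds"])
```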
Posted 1 week ago
6.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Ascentt is building cutting-edge data analytics and AI/ML solutions for global automotive and manufacturing leaders. We turn enterprise data into real-time decisions using advanced machine learning and GenAI. Our team solves hard engineering problems at scale, with real-world industry impact. We're hiring passionate builders to shape the future of industrial intelligence.

Senior Tableau Developer
6+ years of experience in Tableau development

Key Responsibilities
- Build and maintain complex Tableau dashboards with drill-down capabilities, filters, actions, and KPI indicators.
- Write advanced calculations, such as Level of Detail (LOD) expressions, to address business logic like aggregations at different dimensions (a pandas analogue is sketched below).
- Design and implement table calculations for running totals, percent change, rankings, etc.
- Perform data blending and joins across multiple sources, ensuring data accuracy and integrity.
- Optimize Tableau workbook performance by managing extracts, minimizing dashboard load time, and tuning calculations.
- Use parameters, dynamic filters, and action filters for interactive user experiences.
- Design dashboard wireframes and prototypes using Tableau or other tools like Figma.
- Manage publishing, scheduling, and permissions in Tableau Server/Cloud.
- Collaborate with data engineering to design performant, scalable data sources.
- Document data logic, dashboard specs, and technical workflows for governance.
- Provide mentorship and technical guidance to junior Tableau developers.
- Experience in any BI reporting tool like Power BI, Looker, QuickSight, or Alteryx is a plus.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Analytics, or a related field.
- 6+ years of experience in Tableau development.
- Tableau Desktop Certified Professional (preferred).
- Experience with enterprise BI projects and stakeholder engagement.
- SQL proficiency: ability to write complex joins, CTEs, subqueries, and window functions.
- Experience working with large datasets in tools like Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse, or SQL Server.
- Data preparation tools experience (preferred but not required): Tableau Prep, Alteryx, dbt, or equivalent.
- Knowledge of Tableau Server/Cloud administration (publishing, permissions, data source refreshes).
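As referenced in the responsibilities above, Tableau's FIXED LOD expressions pin an aggregate to a chosen dimension regardless of the view's grain; the pandas sketch below mirrors that idea with groupby().transform(). The data is made up for illustration.

```python
import pandas as pd

# Hypothetical sales extract: one row per order line
df = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "product": ["A", "B", "A", "B"],
    "sales":   [100.0, 150.0, 80.0, 120.0],
})

# Tableau { FIXED [region] : SUM([sales]) } pins the aggregate to region,
# regardless of what else is in the view; transform() broadcasts it per row.
df["region_sales"] = df.groupby("region")["sales"].transform("sum")

# Percent-of-region, the kind of ratio LOD expressions are typically used for
df["pct_of_region"] = df["sales"] / df["region_sales"]
print(df)
```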
Posted 1 week ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Overview
A Data Modeller is responsible for designing, implementing, and managing data models that support the strategic and operational needs of an organization. This role involves translating business requirements into data structures, ensuring consistency, accuracy, and efficiency in data storage and retrieval processes.

Responsibilities
- Develop and maintain conceptual, logical, and physical data models.
- Collaborate with business analysts, data architects, and stakeholders to gather data requirements.
- Translate business needs into efficient database designs.
- Optimize and refine existing data models to support analytics and reporting.
- Ensure data models support data governance, quality, and security standards.
- Work closely with database developers and administrators on implementation.
- Document data models, metadata, and data flows.

Requirements
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
- Data modeling tools: ER/Studio, ERwin, SQL Developer Data Modeler, or similar.
- Database technologies: proficiency in SQL and familiarity with databases like Oracle, SQL Server, MySQL, PostgreSQL.
- Data warehousing: experience with dimensional modeling, star and snowflake schemas.
- ETL processes: knowledge of Extract, Transform, Load processes and tools.
- Cloud platforms: familiarity with cloud data services (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
- Metadata management and data governance: understanding of data cataloging and governance principles.
- Strong analytical and problem-solving skills.
- Excellent communication skills to work with business stakeholders and technical teams.
- Ability to document models clearly and explain complex data relationships.
- 5+ years in data modeling, data architecture, or related roles.
- Experience working in Agile or DevOps environments is often preferred.
- Understanding of normalization and denormalization.
- Experience with business intelligence and reporting tools.
- Familiarity with master data management (MDM) principles.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview
Cvent is a leading meetings, events, and hospitality technology provider with more than 4,800 employees and ~22,000 customers worldwide, including 53% of the Fortune 500. Founded in 1999, Cvent delivers a comprehensive event marketing and management platform for marketers and event professionals and offers software solutions to hotels, special event venues and destinations to help them grow their group/MICE and corporate travel business. Our technology brings millions of people together at events around the world. In short, we're transforming the meetings and events industry through innovative technology that powers human connection.

In this role, you will:
- Design, develop, and manage databases on the AWS cloud platform.
- Develop and maintain automation scripts or jobs to perform routine database tasks such as provisioning, backups, restores, and data migrations (a boto3 snapshot sketch follows below).
- Build and maintain automated testing frameworks for database changes and upgrades to minimize the risk of introducing errors.
- Implement self-healing mechanisms to automatically recover from database failures or performance degradation.
- Integrate database automation tools with CI/CD pipelines to enable continuous delivery and deployment of database changes.
- Collaborate with cross-functional teams to understand their data requirements and ensure that the databases meet their needs.
- Implement and manage database security policies, including access control, data encryption, and backup and recovery procedures.
- Ensure that database backups and disaster recovery procedures are in place and tested regularly.
- Develop and maintain database documentation, including data dictionaries, data models, and technical specifications.
- Stay up to date with the latest cloud technologies and trends, and evaluate new tools and products that could improve database performance and scalability.

Here's what you need:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3-6 years of experience in designing, building, and administering databases on the AWS cloud platform.
- Strong experience with infrastructure as code (CloudFormation/AWS CDK) and automation experience in Python.
- In-depth knowledge of AWS database services such as Amazon RDS, EC2, S3, Amazon Aurora, and Amazon Redshift, plus PostgreSQL/MySQL/SQL Server.
- Strong understanding of database design principles, data modelling, and normalisation.
- Experience with database migration to the AWS cloud platform.
- Strong understanding of database security principles and best practices.
- Excellent troubleshooting and problem-solving skills.
- Ability to work independently and in a team environment.

Good to have:
- AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified Database Specialty.
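As noted in the responsibilities above, routine tasks like backups are scripted; below is a small boto3 sketch that creates a timestamped manual RDS snapshot. The instance identifier is hypothetical, and in practice this would run on a schedule (for example, from Lambda).

```python
import datetime

import boto3

rds = boto3.client("rds", region_name="us-east-1")

def snapshot_instance(instance_id: str) -> str:
    """Create a timestamped manual snapshot for the given RDS instance."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
    snapshot_id = f"{instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    return snapshot_id

# Hypothetical instance name
print(snapshot_instance("orders-postgres-prod"))
```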
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Amazon Prime is a program that provides millions of members with unlimited one-day delivery, unlimited streaming of video and music, secure online photo storage, and access to Kindle e-books, as well as Prime special deals on Prime Day. In India, Prime members get unlimited free One-Day and Two-Day delivery, video streaming, and early and exclusive access to deals. After the launch in 2016, the Amazon Prime team is now looking for a detail-oriented Business Intelligence Engineer to lead business intelligence for Prime and drive member insights.
At Amazon, we're always working to be the most customer-centric company on earth. To get there, we need exceptionally talented, bright, and driven people. We are looking for a dynamic, organized, and customer-focused analytics expert to join our Amazon Prime Analytics team. The team supports the Amazon India Prime organization by producing and delivering metrics, data, models, and strategic analyses. This is a highly challenging role that requires excellent team leadership skills, business acumen, and the breadth to work across multiple Amazon Prime business, data engineering, machine learning, and software development teams. A successful candidate will be a self-starter who is comfortable with ambiguity, has strong attention to detail, and has a proven ability to work in a fast-paced and ever-changing environment.
Key job responsibilities
Develop a long-term analytical strategy and drive its implementation.
Use analytics to influence multiple departments, increasing their productivity and effectiveness to achieve strategic goals.
Identify, develop, manage, and execute analyses to uncover areas of opportunity, and present written business recommendations that help shape our business roadmap.
Perform large-scale data mining to find trends and problems, then communicate and drive corrective action.
Develop experimental and analytic plans for data modeling processes, use strong baselines, and accurately determine cause-and-effect relationships.
Apply statistical and machine learning methods to specific business problems.
Partner with the Machine Learning team on model building with respect to variable definition, model validation, and creation of a targeting strategy.
Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions across Amazon Prime business units.
Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience with statistical analysis packages such as R, SAS, and MATLAB
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling (see the sketch below)
Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Karnataka
Job ID: A2890009
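The SQL-plus-Python workflow named in the basic qualifications might look like the following minimal sketch: pull rows with SQL, then derive per-member features with pandas. sqlite3 stands in here for a warehouse connection such as Redshift; the schema and values are purely illustrative.

    # Minimal sketch: pull data with SQL, then process it in Python for modeling.
    # sqlite3 stands in for a warehouse connection; names are illustrative.
    import sqlite3

    import pandas as pd

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE orders (member_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES (1, '2024-01-05', 120.0), (1, '2024-02-11', 80.0),
                              (2, '2024-01-20', 35.5);
    """)

    # Pull with SQL, then derive per-member features for a downstream model.
    df = pd.read_sql("SELECT member_id, order_date, amount FROM orders", conn)
    features = (
        df.groupby("member_id")["amount"]
          .agg(order_count="count", total_spend="sum", avg_spend="mean")
          .reset_index()
    )
    print(features)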
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.
HiLabs Team
Multidisciplinary industry leaders
Healthcare domain experts
AI/ML and data science experts
Professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT)
Be part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.
Job Title: Data Engineer I/II
Job Location: Bangalore, Karnataka, India
Job Summary
We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for software developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.
Responsibilities
Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources (see the sketch below).
Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.
Desired Profile
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
Familiarity with version control systems like Git.
HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.
Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.
HiLabs Total Rewards
Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contributions toward your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, a collaborative working environment, and smart mentorship from highly qualified, multidisciplinary, incredibly talented professionals from renowned and accredited medical schools, business schools, and engineering institutes.
CCPA disclosure notice: https://www.hilabs.com/privacy
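For the ETL/ELT pipeline work referenced above, a minimal Apache Airflow DAG might look like the sketch below (assuming Airflow 2.4 or later). The three task bodies are placeholders; a real pipeline would call actual ingestion, transformation, and load code.

    # Minimal Airflow DAG sketch for a daily ETL pipeline.
    # Task logic is placeholder; real pipelines would call real transform code.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw records from source systems")

    def transform():
        print("clean and validate records")

    def load():
        print("write curated records to the warehouse")

    with DAG(
        dag_id="example_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        t_extract >> t_transform >> t_load

The >> operator declares task dependencies, so Airflow runs extract, transform, and load strictly in that order each day.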
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Have you ever thought about what it takes to detect and prevent fraudulent activity among hundreds of millions of e-commerce transactions across the globe? What would you do to increase trust in an online marketplace where millions of buyers and sellers transact? How would you build systems that evolve over time to proactively identify and neutralize new and emerging fraud threats? Our mission in Buyer Risk Prevention is to make Amazon the safest place to transact online. Buyer Risk Prevention safeguards every financial transaction across all Amazon sites, while striving to ensure that these efforts are transparent to our legitimate customers. As such, Buyer Risk Prevention designs and builds the software systems, risk models, and operational processes that minimize risk and maximize trust in Amazon.com.
Within BRP, we are looking for a leader for Payment Risk Operations. Our ideal candidate will be an experienced people leader who can thrive in an ambiguous and fast-paced business landscape. You are passionate about working with complex datasets and love to dive deep, analyze, and turn data into insights. You will be responsible for analyzing terabytes of data to identify specific instances of risk, broader risk trends, and points of customer friction, and for developing scalable solutions for prevention. You will be a leader of leaders within the PRO Analytics team, leading Business Analysts and MIS/BA managers. You should have deep expertise in taking an analytical view of business questions, building and refining metrics frameworks to measure business operations, and translating data into meaningful insights using a breadth of tools and terabytes of data. In this role, you will own end-to-end analytics development for complex questions, and you will play an integral role in strategic decision-making. You should have excellent business and communication skills, working with business owners to understand business challenges and opportunities and driving data-driven decisions into process and tool improvements together with the business team. You will need to collaborate effectively with business and product leaders within PRO and cross-functional teams across BRP to solve problems, create operational efficiencies, and deliver successfully against high organizational standards. In addition, you will be responsible for building a robust set of operational and business metrics and will use those metrics to identify improvement opportunities. This is a high-impact role with goals that directly affect the bottom line of the business.
Key job responsibilities
Build and execute the strategy for the Payment Risk Operations Analytics team
Hire, manage, coach, and lead a high-performing team of Business Analysts
Develop inferences using statistical rigor to simplify and inform the larger team of noteworthy findings that impact the business
Build datasets, metrics, and KPIs supporting the business
Design and develop highly available dashboards and metrics using SQL and Excel/QuickSight or other BI reporting tools (see the sketch below)
Perform business analysis and data queries using scripting languages like R, Python, etc.
Design, implement, and support end-to-end analytical solutions that are highly available, reliable, secure, and scale economically
Collaborate cross-functionally to recognize and help adopt best practices in reporting and analysis, data integrity, test design, analysis, validation, and documentation
Proactively identify problems and opportunities, and perform root-cause analysis/diagnosis leading to significant business impact
Work closely with internal stakeholders such as Operations, Program Managers, Workforce, Capacity Planning, machine learning, finance, and partner teams to align them with respect to your focus area
Own the delivery and backup of periodic metrics, dashboards, and other reports to the leadership team
Manage all aspects of BI projects, such as project planning, requirements definition, risk management, communication, and implementation planning
Execute high-priority (i.e., cross-functional, high-impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers
Basic Qualifications
7+ years of business intelligence and analytics experience
5+ years of delivering results managing a business intelligence or analytics team, including sprint planning, roadmap planning, employee development, and performance management experience
Experience creating complex SQL queries joining multiple datasets; knowledge of ETL and data warehouse concepts
Experience with Excel
5+ years using data visualization tools like Tableau, QuickSight, or similar
Experience with R, Python, or other statistical/machine learning tools
Demonstrated problem-solving and root-cause-analysis experience
Experience using databases with large-scale data sets
Bachelor's degree in engineering, analytics, mathematics, statistics, or a related technical or quantitative field
Detail-oriented, with an aptitude for solving unstructured problems; the role requires extracting data from various sources and designing, constructing, and executing complex analyses that produce the data and reports needed to solve business problems
Analytical mindset and the ability to see the big picture and influence others
Good oral, written, and presentation skills, combined with the ability to take part in group discussions with leadership and explain complex solutions to non-technical audiences
Preferred Qualifications
Experience with Amazon Redshift and other AWS technologies
Experience scripting for automation (e.g., Python, Perl, Ruby)
Experience in e-commerce/online companies in fraud/risk control functions
Ability to apply analytical, computational, statistical, and quantitative problem-solving skills
Ability to work effectively in a multi-task, high-volume environment
Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - BLR 14 SEZ
Job ID: A2966814
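A metric of the kind this role would publish to a dashboard can often be expressed as a single SQL aggregation. The sketch below computes a weekly rejection rate; sqlite3 stands in for the warehouse, and the schema and values are purely illustrative, not real risk data.

    # Minimal sketch: compute a weekly risk-ops KPI with SQL.
    # sqlite3 stands in for the warehouse; schema and values are illustrative.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE reviews (review_id INTEGER, week TEXT, decision TEXT);
    INSERT INTO reviews VALUES
        (1, '2024-W01', 'approved'), (2, '2024-W01', 'rejected'),
        (3, '2024-W02', 'approved'), (4, '2024-W02', 'approved');
    """)

    # Weekly rejection rate: the kind of metric a QuickSight dashboard would plot.
    rows = conn.execute("""
        SELECT week,
               AVG(CASE WHEN decision = 'rejected' THEN 1.0 ELSE 0.0 END)
                   AS reject_rate
        FROM reviews
        GROUP BY week
        ORDER BY week
    """).fetchall()
    for week, rate in rows:
        print(week, f"{rate:.0%}")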
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Translation Services Data and Analytics seeks a passionate Data Engineer to drive innovation in the translation analytics space by creating data pipelines that handle large-volume data, helping our customers analyze and understand Amazon Translation coverage across languages. We support Translation Services in making data-driven decisions by providing easy access to data and self-serve analytics. We work closely with internal stakeholders and cross-functional teams to solve business problems through data: building data pipelines, developing automated reporting, and diving deep into data to identify actionable root causes.
Key job responsibilities
Work closely with data scientists and business intelligence engineers to create robust data architectures and pipelines (see the sketch below).
Develop and manage scalable, automated, and fault-tolerant data solutions.
Simplify and enhance the accessibility, clarity, and usability of large or complex datasets through the development of advanced ETL, BI dashboards, and applications.
Take ownership of the design, creation, and upkeep of metrics, reports, analyses, and dashboards to inform key business decisions.
Navigate ambiguous environments by evaluating various options using both data-driven insights and business expertise.
A day in the life
Data Engineers focus on managing customer requests, maintaining operational excellence, and enhancing core data analytics infrastructure. You will collaborate closely with both technical and non-technical teams to design and execute roadmaps for essential Translation Services metrics.
If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply!
Basic Qualifications
3+ years of data engineering experience
Experience with data modeling, warehousing, and building ETL pipelines
Experience with SQL
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases)
Bachelor's degree
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2929407
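A minimal PySpark sketch of the kind of pipeline transform this posting describes: computing per-language translation coverage from raw records. Column names and values are illustrative; on AWS this logic would typically run as an EMR or Glue job reading from and writing to S3.

    # Minimal PySpark ETL sketch: per-language translation coverage.
    # Names and values are illustrative only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("translation-coverage").getOrCreate()

    # Extract: a stand-in for reading raw translation records from S3.
    raw = spark.createDataFrame(
        [("de", "done"), ("de", "pending"), ("ja", "done")],
        ["language", "status"],
    )

    # Transform: coverage = share of records per language already translated.
    coverage = raw.groupBy("language").agg(
        F.avg(F.when(F.col("status") == "done", 1.0).otherwise(0.0))
         .alias("coverage")
    )

    # Load: in production this would be coverage.write.parquet("s3://...").
    coverage.show()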
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
“When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos
Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on a map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate’s route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that helps make those decisions in time.
The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both the stamina and the mental quickness to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and the executive presence to get in front of VPs and SVPs across Amazon will be imperative.
Key Strategic Objectives
Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on the Amazon last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute against a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and able to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example, being just as passionate about operational performance and predictability as about every other aspect of the customer experience.
The Successful Candidate Will Be Able To
Effectively manage customer expectations and resolve conflicts that balance client and company needs.
Develop processes to effectively maintain and disseminate project information to stakeholders.
Be successful in a delivery-focused environment and determine the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker or spokesperson, but someone who drives to get things done.
Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation, and innovation, and drive projects with urgency in this fast-paced environment.
Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
Serve as a role model for Amazon Leadership Principles inside and outside the organization.
Actively seek to implement and distribute best practices across the operation.
Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with a scripting language (e.g., Python, Java, or R)
Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
Experience applying basic statistical methods (e.g., regression) to difficult business problems (see the sketch below)
Experience gathering business requirements and using industry-standard business intelligence tools to extract data, formulate metrics, and build reports
Track record of generating key business insights and collaborating with stakeholders
Preferred Qualifications
Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability)
Knowledge of data modeling and data pipeline design
Experience designing and implementing custom reporting systems using automation tools
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - Amazon Dev Center India - Hyderabad
Job ID: A2967546
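To illustrate "applying basic statistical methods to business problems", here is a minimal, hedged sketch: a logistic regression relating a hypothetical address-quality score to delivery success, fitted on synthetic data. Nothing here reflects real Amazon data or models.

    # Minimal sketch: logistic regression on a synthetic business question,
    # e.g. does better address data predict delivery success?
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    address_quality = rng.uniform(0, 1, size=200).reshape(-1, 1)
    # Synthetic ground truth: better address data -> higher success odds.
    delivered = (
        rng.uniform(0, 1, 200) < 0.4 + 0.5 * address_quality.ravel()
    ).astype(int)

    model = LogisticRegression().fit(address_quality, delivered)
    print("coefficient:", model.coef_[0][0])  # positive => quality helps
    print("p(success | quality=0.9):",
          model.predict_proba([[0.9]])[0][1].round(2))

In a real analysis the coefficient's sign and magnitude, together with confidence intervals, would be the basis for the written business recommendation.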
Posted 1 week ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with Redshift expertise can find opportunities across many industries in the country.
The average salary range for Redshift professionals in India varies by experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
Junior Developer
Data Engineer
Senior Data Engineer
Tech Lead
Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
SQL
ETL Tools
Data Modeling
Cloud Computing (AWS)
Python/R Programming
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!