
359 Athena Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We're looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities:
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements:
- 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Strong hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
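For candidates gauging fit, a minimal sketch of the Athena workflow this stack implies, using boto3; the region, database, query, and S3 output bucket are illustrative assumptions, not details from the posting:

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # region is an assumption

# Start a query against a Glue Data Catalog database; all names are placeholders.
execution = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM clickstream GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/queries/"},
)
query_id = execution["QueryExecutionId"]

# Athena runs queries asynchronously, so poll until a terminal state is reached.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"fetched {len(rows) - 1} data rows")  # the first row is the header
```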

Posted 23 hours ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a skilled PySpark Developer with strong working experience in Python programming and a focus on PySpark. You must have hands-on experience with AWS Glue and PySpark, along with strong knowledge of popular libraries such as pandas and numpy. This role requires experience in parallel batch processing, AWS Batch, and Step Functions, as well as knowledge of other AWS services like ECS, ECR, Docker, CloudWatch, and Athena.

Roles & Responsibilities:
- Utilize strong working experience in Python programming to develop data processing solutions.
- Apply expertise with the PySpark framework for big data processing.
- Work extensively with AWS Glue and PySpark for building and managing data pipelines.
- Use popular Python libraries such as pandas, numpy, and joblib to enhance data manipulation and processing.
- Conduct parallel batch processing with Python to handle large-scale data.
- Work with AWS Batch and Step Functions for orchestrating and managing data workflows.
- Utilize knowledge of AWS ECS, ECR, and Docker for containerization and deployment.
- Leverage CloudWatch, Athena, and other AWS services for monitoring, analytics, and data management.
- Collaborate with data scientists and other engineers to understand requirements and deliver efficient solutions.
- Troubleshoot and optimize data processing jobs for performance and cost efficiency.

Skills Required:
- Strong working experience in Python programming.
- Expertise with the PySpark framework; experience with AWS Glue and PySpark is a must.
- Strong experience with pandas, numpy, joblib, and other popular libraries.
- Good working experience with parallel batch processing in Python.
- Good working experience with AWS Batch and Step Functions.
- Good experience with AWS ECS, ECR, and Docker.
- Good knowledge of CloudWatch, Athena, and other AWS services.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.

Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
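As background for the Glue-plus-PySpark requirement above, a minimal Glue job skeleton; the catalog database, table, filter column, and S3 path are hypothetical placeholders:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard AWS Glue job boilerplate: resolve arguments and build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog into a DynamicFrame.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Convert to a Spark DataFrame for familiar SQL-style filtering.
df = source.toDF().filter("order_status = 'COMPLETED'")

# Write the curated result back to S3 as date-partitioned Parquet.
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

job.commit()
```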

Posted 1 day ago

Apply

10.0 - 14.0 years

0 Lacs

Haryana

On-site

As a Digital Product Engineering company, Nagarro is seeking a talented individual to join our dynamic and non-hierarchical work culture as a Data Engineer. With over 17,500 experts across 39 countries, we are scaling in a big way and are looking for someone with 10+ years of total experience to contribute to our team.

**Requirements:**
- Strong working experience in Data Engineering and Big Data platforms.
- Hands-on experience with Python and PySpark.
- Expertise with AWS Glue, including Crawlers and the Data Catalog.
- Experience with Snowflake and a strong understanding of AWS services such as S3, Lambda, Athena, SNS, and Secrets Manager.
- Familiarity with Infrastructure-as-Code (IaC) tools like CloudFormation and Terraform is preferred.
- Strong experience with CI/CD pipelines, preferably using GitHub Actions, is a plus.
- Working knowledge of Agile methodologies, JIRA, and GitHub version control.
- Exposure to data quality frameworks, observability, and data governance tools and practices is advantageous.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

**Responsibilities:**
- Writing and reviewing high-quality code to meet technical requirements.
- Understanding clients' business use cases and converting them into technical designs.
- Identifying and evaluating different solutions to meet clients' requirements.
- Defining guidelines and benchmarks for Non-Functional Requirements (NFRs) during project implementation.
- Developing design documents explaining the architecture, framework, and high-level design of applications.
- Reviewing architecture and design aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs.
- Designing overall solutions for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks.
- Relating technology integration scenarios and applying the learnings in projects.
- Resolving issues raised during code review through systematic root-cause analysis.
- Conducting Proof of Concepts (POCs) to ensure suggested designs/technologies meet requirements.

**Qualifications:** Bachelor's or master's degree in Computer Science, Information Technology, or a related field is required.

If you are passionate about Data Engineering, experienced in working with Big Data platforms, proficient in Python and PySpark, and have a strong understanding of AWS services and Infrastructure-as-Code tools, we invite you to join Nagarro and be part of our innovative team.
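To illustrate the Glue Crawlers and Data Catalog expertise called out above, a small boto3 sketch; the crawler and database names are assumptions made for the example:

```python
import boto3

glue = boto3.client("glue")

# Kick off a crawler that scans S3 and registers/updates tables in the Data Catalog.
glue.start_crawler(Name="sales-raw-crawler")  # placeholder crawler name

# After the crawler finishes, inspect what it registered.
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="sales_raw"):  # placeholder database
    for table in page["TableList"]:
        columns = ", ".join(col["Name"] for col in table["StorageDescriptor"]["Columns"])
        print(f"{table['Name']}: {columns}")
```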

Posted 1 day ago

Apply

1.0 - 5.0 years

0 Lacs

Jaipur, Rajasthan

On-site

The Node.js Developer role at Appinop Technologies involves developing IoT applications using Node.js and related frameworks. You will be responsible for writing reusable, testable, and efficient code, as well as working with RESTful APIs. Experience with the Serverless Framework on AWS using Lambda, API Gateway, and AWS database services like DynamoDB and RDS is required. Additionally, familiarity with other AWS services such as Cognito, AWS Elasticsearch, Kinesis Data Streams, Firehose, Athena, and S3 will be beneficial. Your role will also include designing and implementing low-latency, high-availability, and performant applications, along with ensuring the implementation of security and data protection measures.

Requirements and Qualifications:
- Previous working experience as a Node.js Developer for 1-2 years.
- Bachelor's degree in computer science, information science, or a related field.
- Exceptional analytical and problem-solving aptitude.
- Extensive knowledge of JavaScript, web stacks, and libraries.
- Knowledge of front-end technologies such as HTML5 and CSS3.
- Superb interpersonal, communication, and collaboration skills.
- Availability to resolve urgent web application issues outside of business hours.

About Appinop: Appinop is an organization that encourages innovation and transformation through the collaboration of like-minded individuals. As part of Appinop, you will work with a diverse mix of talented people dedicated to transforming businesses through insights, creativity, and technology. We offer an environment where passionate individuals can grow into proficient professionals and explore new frontiers in software development. At Appinop, we value equality, learning, collaboration, and creative freedom to foster the growth of our employees. If you are looking to work with technical experts in the latest technologies and be part of a culture that promotes personal and professional development, Appinop is the place for you. For more information about our solutions and organization, please visit www.Appinop.com.

Skills required: MongoDB, Mongoose, Express
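Although this posting targets Node.js, the serverless pattern it describes (API Gateway in front of Lambda writing to DynamoDB) looks the same in any runtime; a sketch in Python for consistency with the other examples on this page, with the table name and payload shape invented for illustration:

```python
import json

import boto3

# DynamoDB table sitting behind an API Gateway -> Lambda route (name is a placeholder).
table = boto3.resource("dynamodb").Table("device_readings")

def handler(event, context):
    """Minimal API Gateway proxy-integration handler that persists an IoT reading."""
    body = json.loads(event.get("body") or "{}")
    table.put_item(
        Item={
            "device_id": body["device_id"],
            "timestamp": body["timestamp"],
            # Stored as a string to sidestep DynamoDB's Decimal requirement for floats.
            "reading": str(body.get("reading")),
        }
    )
    return {"statusCode": 201, "body": json.dumps({"status": "stored"})}
```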

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Sykatiya Technology Pvt Ltd is a leading Semiconductor Industry innovator committed to leveraging cutting-edge technology to solve complex problems. We are currently looking for a highly skilled and motivated Data Scientist to join our dynamic team and contribute to our mission of driving innovation through data-driven insights. As the Lead Data Scientist and Machine Learning Engineer at Sykatiya Technology Pvt Ltd, you will play a crucial role in analyzing large datasets to uncover patterns, develop predictive models, and implement AI/ML solutions. Your responsibilities will include working on projects involving neural networks, deep learning, data mining, and natural language processing (NLP) to drive business value and enhance our products and services.

Key Responsibilities:
- Lead the design and implementation of machine learning models and algorithms to address complex business problems.
- Utilize deep learning techniques to refine neural network models and improve prediction accuracy.
- Conduct data mining and analysis to extract actionable insights from both structured and unstructured data.
- Apply natural language processing (NLP) techniques for advanced text analytics.
- Develop and maintain end-to-end data pipelines, ensuring data integrity and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
- Mentor and guide junior data scientists and engineers in best practices and advanced techniques.
- Stay updated with the latest advancements in AI/ML, neural networks, deep learning, data mining, and NLP.

Technical Skills:
- Proficiency in Python and its libraries such as NumPy, pandas, scikit-learn, TensorFlow, Keras, and PyTorch.
- Strong understanding of machine learning algorithms and techniques.
- Extensive experience with neural networks and deep learning frameworks.
- Hands-on experience with data mining and analysis techniques.
- Proficiency in natural language processing (NLP) tools and libraries like NLTK, spaCy, and transformers.
- Proficiency in Big Data technologies including Sqoop, Hadoop, HDFS, Hive, and PySpark.
- Experience with cloud platforms, such as AWS services like S3, Step Functions, EventBridge, Athena, RDS, Lambda, and Glue.
- Strong knowledge of database management systems like SQL, Teradata, MySQL, PostgreSQL, and Snowflake.
- Familiarity with other tools like ExactTarget, Marketo, SAP BO, Agile, and JIRA.
- Strong analytical skills to analyze large datasets and derive actionable insights.
- Excellent problem-solving skills with the ability to think critically and creatively.
- Effective communication skills and teamwork abilities to collaborate with various stakeholders.

Experience: At least 8 to 12 years of experience in a similar role.
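For a sense of the NLP baseline work this role describes, a self-contained scikit-learn text-classification sketch; the public 20 Newsgroups dataset stands in for proprietary data:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# A two-category subset keeps the example quick to run.
data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)

# TF-IDF features feeding a linear classifier: a common first baseline
# before reaching for deep learning or transformer models.
model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=20_000, stop_words="english")),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```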

Posted 2 days ago

Apply

8.0 - 10.0 years

0 Lacs

India

Remote

Job Title: Data Engineer (Remote)
Working Hours: 4-hour overlap with EST (9 AM-1 PM)
Type: Full-Time | Department: Engineering

We're hiring skilled Data Engineers to join our remote tech team. You'll develop scalable, cloud-based data products and lead small teams to deliver high-impact solutions. Ideal candidates bring deep technical expertise and a passion for innovation.

Key Responsibilities:
- Build and optimize scalable data systems and pipelines
- Design APIs for data integration
- Lead a small development team, conduct code reviews, mentor juniors
- Collaborate with cross-functional teams
- Contribute to architecture and system design

Must-Have Skills:
- 8+ years in Linux, Bash, Python, SQL
- 4+ years in Spark, Hadoop ecosystem
- 4+ years with AWS (EMR, Glue, Athena, Redshift)
- Team leadership experience

Preferred:
- Experience with dbt, Airflow, Hive, data cataloging tools
- Knowledge of GCP, scalable pipelines, data partitioning/clustering
- BS/MS/PhD in CS or equivalent experience
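The preferred data partitioning/clustering skill boils down to layouts like the one sketched below: a hedged PySpark example that writes date-partitioned Parquet so Athena or Glue queries can prune whole directories; the paths and column names are invented:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioned-writes").getOrCreate()

# Source path and schema are placeholders for the example.
events = spark.read.json("s3://example-bucket/raw/events/")

# Derive a date column, cluster rows by it, and write partitioned Parquet so
# downstream queries filtering on event_date skip non-matching directories.
(
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .repartition("event_date")  # one shuffle so each partition writes few files
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)
```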

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a SQL Developer in our Database department located in Hyderabad, you will be responsible for developing, implementing, and optimizing stored procedures and functions using MySQL / Oracle / Athena. With a minimum of 5 years of experience, you will join our team to analyze existing SQL queries for performance improvements, develop procedures and scripts for data migration, and discuss requirements with stakeholders to develop SQL for reporting purposes.

Your main responsibilities will include optimizing large and complicated SQL statements, developing procedures and scripts for deployments or data migrations, coordinating with BI developers to efficiently create reports, and troubleshooting production issues and Jasper Report problems within agreed deadlines. Additionally, you will work on sprint tasks and Jira tickets, focus on product enhancements such as dashboards and new report launches, collaborate with front-end developers, analyze database architecture for new developments, and prepare documents for future reference.

To excel in this role, you should have strong proficiency with MySQL and its variations among popular databases, be skilled at troubleshooting common database issues, possess experience in report development using complex SQL, and have technical knowledge of ETL, data loading, procedures and functions, query optimization, etc. You should also be passionate about creating good design and usability.

The ideal candidate would be a B.Tech graduate with experience in software development and proficiency in MySQL, Presto, ETL, data loaders, and reporting tools. Soft skills such as MS Word, Excel, PowerPoint, and email management are also required. This full-time position offers a competitive package, performance incentives, mobile allowance, travel benefits, and other industry-standard compensation. If you are passionate about database engineering and have the required qualifications and skills, we encourage you to apply for this position and be a valuable part of our software development team in Hyderabad.

Posted 3 days ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Pune, Maharashtra

On-site

You will be responsible for architecting data warehousing and business intelligence solutions to address cross-functional business challenges. This will involve interacting with business stakeholders to gather requirements and deliver comprehensive Data Engineering, Data Warehousing, and analytics solutions. Additionally, you will collaborate with other technology teams to extract, transform, and load data from diverse sources.

You should have a minimum of 5-8 years of end-to-end Data Engineering development experience, preferably across industries such as Retail, FMCG, Manufacturing, Finance, and Oil & Gas. Experience in functional domains like Sales, Procurement, Cost Control, Business Development, and Finance is desirable. You are expected to have 3 to 10 years of experience in data engineering projects using Azure or AWS services, with hands-on expertise in data transformation, processing, and migration using tools such as Azure Data Lake Storage, Azure Data Factory, Databricks, AWS Glue, Redshift, and Athena. Familiarity with MS Fabric and its components will be advantageous, along with experience working with different source/target systems like Oracle Database, SQL Server Database, Azure Data Lake Storage, ERP, CRM, and SCM systems. Proficiency in reading data from sources via APIs/Web Services and using APIs to write data to target systems is essential. You should also have experience in data cleanup, data cleansing, and optimization tasks, including working with non-structured data sets in Azure. Knowledge of analytics tools like Power BI and Azure Analysis Services, as well as exposure to private and public cloud architectures, will be beneficial. Excellent written and verbal communication skills are crucial for this role.

Ideally, you hold a degree in M.Tech / B.E. / B.Tech (Computer Science, Information Systems, IT) / MCA / MCS. Key requirements include expertise in MS Azure Data Factory, Python and PySpark coding, Synapse Analytics, Azure Function Apps, Azure Databricks, AWS Glue, Athena, Redshift, and Databricks PySpark. Exposure to integration with various applications/systems like ERP, CRM, SCM, and WebApps using APIs, cloud and on-premise systems, DBs, and file systems is expected. The role requires a minimum of 3 full-cycle Data Engineering implementations (5-10 years of experience) with a focus on building data warehouses and implementing data models. Exposure to the consulting industry is mandatory, along with strong verbal and written communication skills.

Your primary skills should encompass Data Engineering development, cloud engineering with Azure or AWS, Data Warehousing & BI solutions architecture, programming (Python/PySpark), data integration across various systems, consulting experience, ETL and data transformation, and knowledge of cloud architecture. Additionally, familiarity with MS Fabric, handling non-structured data, data cleanup and optimization, APIs/Web Services, and data visualization, plus industry and functional knowledge, will be advantageous. The compensation package ranges from INR 12-28 lpa, subject to the candidate's performance and experience level.

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Golden Eagle IT Technologies Pvt. Ltd. is looking for a skilled Data Engineer with 2 to 4 years of experience to join the team in Indore. The ideal candidate should have a solid background in data engineering, big data technologies, and cloud platforms. As a Data Engineer, you will be responsible for designing, building, and maintaining efficient, scalable, and reliable data pipelines.

You will be expected to develop and maintain ETL pipelines using tools like Apache Airflow, Spark, and Hadoop. Additionally, you will design and implement data solutions on AWS, leveraging services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Working with messaging systems like Kafka for managing data streaming and real-time data processing will also be part of your responsibilities. Proficiency in Python and Scala for data processing, transformation, and automation is essential. Ensuring data quality and integrity across multiple sources and formats will be a key aspect of your role, as will collaborating with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions, optimizing and tuning data systems for performance and scalability, and implementing best practices for data security and compliance.

Preferred skills include experience with infrastructure-as-code tools like Pulumi, familiarity with GraphQL for API development, and exposure to machine learning and data science workflows, particularly using SageMaker.

Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 2-4 years of experience in data engineering or a similar role; proficiency in AWS cloud services and big data technologies; strong programming skills in Python and Scala; knowledge of data warehousing concepts and tools; and excellent problem-solving and communication skills.
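Since Apache Airflow anchors the ETL tooling named above, a minimal DAG skeleton of the kind this role would maintain; the task bodies are stubs and all names are illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")

def transform():
    print("clean and reshape the data")

def load():
    print("write to the warehouse")

# A daily three-step ETL pipeline with linear task dependencies.
with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task
```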

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role - Cloud Architect, Analytics & Data Products

We're looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities:
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements:
- 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Strong hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.

Posted 3 days ago

Apply

4.0 - 6.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.

About the Role: In this role as a Data Engineer, you will:
- Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
- Work closely with data scientists and data visualization teams to understand data requirements to ensure the availability of high-quality data for analytics, modelling, and reporting.
- Build pipelines that source, transform, and load data that's both structured and unstructured, keeping in mind data security and access controls.
- Explore large volumes of data with curiosity and conviction.
- Contribute to the strategy and architecture of data management systems and solutions.
- Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
- Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
- Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.

Shift Timings: 12 PM to 9 PM (IST). Work from office for 2 days a week (mandatory).

About You: You're a fit for the role of Data Engineer if your background includes:
- At least 4-6 years of total work experience with at least 2+ years in data engineering or analytics domains.
- A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
- SQL proficiency (a must).
- Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx, or similar solutions.
- Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
- Experience with orchestration tools like Airflow or Dagster.
- Preferred: experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
- Data modelling knowledge of schemas like snowflake and star.
- Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
- Knowledge of building ETL workflows, database design, and query optimization.
- Experience with a scripting language like Python.
- Works well within a team and collaborates with colleagues across domains and geographies.
- Excellent oral, written, and visual communication skills.
- A demonstrable ability to assimilate new information thoroughly and quickly.
- Strong logical and scientific approach to problem-solving.
- Can articulate complex results in a simple and concise manner to all levels within the organization.

#LI-GS2

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here.
More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 4 days ago

Apply

6.0 - 7.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.

About the Role: In this role as a Senior Data Engineer, you will:
- Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
- Work closely with data scientists and data visualization teams to understand data requirements to ensure the availability of high-quality data for analytics, modelling, and reporting.
- Build pipelines that source, transform, and load data that's both structured and unstructured, keeping in mind data security and access controls.
- Explore large volumes of data with curiosity and conviction.
- Contribute to the strategy and architecture of data management systems and solutions.
- Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
- Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
- Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.

Shift Timings: 12 PM to 9 PM (IST). Work from office for 2 days a week (mandatory).

About You: You're a fit for the role of Senior Data Engineer if your background includes:
- At least 6-7 years of total work experience with at least 3+ years in data engineering or analytics domains.
- A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
- SQL proficiency (a must).
- Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx, or similar solutions.
- Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
- Experience with orchestration tools like Airflow or Dagster.
- Preferred: experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
- Data modelling knowledge of schemas like snowflake and star.
- Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
- Knowledge of building ETL workflows, database design, and query optimization.
- Experience with a scripting language like Python.
- Works well within a team and collaborates with colleagues across domains and geographies.
- Excellent oral, written, and visual communication skills.
- A demonstrable ability to assimilate new information thoroughly and quickly.
- Strong logical and scientific approach to problem-solving.
- Can articulate complex results in a simple and concise manner to all levels within the organization.

#LI-GS2

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here.
More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 4 days ago

Apply

3.0 - 5.0 years

40 - 45 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Build, maintain, update, and manage complex Tableau dashboards to provide actionable insights to business stakeholders.
- Perform ad-hoc data analysis using Python, SQL, and basic AWS Cloud skills.
- Collaborate with peer data analysts to collectively manage and optimize dashboards critical to business operations.
- Work closely with stakeholders to identify KPIs, develop reports, and ensure alignment with VG standards and leading practices.
- Analyse data from diverse sources to provide insights that drive decision-making processes.
- Navigate ambiguity and proactively design solutions that meet stakeholder needs while adhering to organizational goals.

Mandatory Technical Skills:
- Proficiency in Tableau for creating and managing dashboards.
- Strong skills in SQL for querying and data extraction.
- Working knowledge of Python for data manipulation and analysis.
- Basic understanding of AWS Cloud concepts and tools to perform cloud-based data analysis.

Preferred Skills:
- Familiarity with AWS services like S3, Redshift, or Athena is a plus.
- Experience in developing and maintaining KPI reports adhering to business standards.
- Strong problem-solving skills with the ability to work independently and collaboratively.
- Excellent communication and stakeholder management abilities.
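The ad-hoc Python + SQL + AWS combination above often reduces to pulling an Athena result straight into pandas; a sketch using the AWS SDK for pandas (awswrangler), with the database, table, and column names invented for illustration:

```python
import awswrangler as wr  # the AWS SDK for pandas

# Run an aggregate in Athena and land it directly in a DataFrame.
df = wr.athena.read_sql_query(
    sql="""
        SELECT region,
               DATE_TRUNC('week', order_ts) AS week,
               SUM(amount) AS revenue
        FROM orders
        GROUP BY 1, 2
    """,
    database="analytics_db",  # placeholder Glue database
)

# A quick pivot that could feed a Tableau extract or sanity-check a dashboard.
print(df.pivot_table(index="week", columns="region", values="revenue", aggfunc="sum").head())
```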

Posted 4 days ago

Apply

2.0 - 3.0 years

5 - 15 Lacs

Hyderabad, Telangana, India

On-site

About the Role: In this role as a Data Engineer, you will:
- Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
- Work closely with data scientists and data visualization teams to understand data requirements to ensure the availability of high-quality data for analytics, modelling, and reporting.
- Build pipelines that source, transform, and load data that's both structured and unstructured, keeping in mind data security and access controls.
- Explore large volumes of data with curiosity and conviction.
- Contribute to the strategy and architecture of data management systems and solutions.
- Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
- Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
- Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.

Shift Timings: 12 PM to 9 PM (IST). Work from office for 2 days a week (mandatory).

About You: You're a fit for the role of Data Engineer if your background includes:
- At least 4-6 years of total work experience with at least 2+ years in data engineering or analytics domains.
- A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
- SQL proficiency (a must).
- Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx, or similar solutions.
- Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
- Experience with orchestration tools like Airflow or Dagster.
- Preferred: experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
- Data modelling knowledge of schemas like snowflake and star.
- Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
- Knowledge of building ETL workflows, database design, and query optimization.
- Experience with a scripting language like Python.
- Works well within a team and collaborates with colleagues across domains and geographies.
- Excellent oral, written, and visual communication skills.
- A demonstrable ability to assimilate new information thoroughly and quickly.
- Strong logical and scientific approach to problem-solving.
- Can articulate complex results in a simple and concise manner to all levels within the organization.

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Join GlobalLogic and become a valuable part of the team working on a significant software project for a world-class company that provides M2M / IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. As part of our engagement, you will assist in developing end-user module firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and conducting analysis and estimation of customer requirements.

Your responsibilities will include:
- Hands-on experience in cloud deployment using Terraform
- Proficiency in branching, merging, tagging, and maintaining versions across environments using Git and Jenkins pipelines
- Ability to work on Continuous Integration (CI) and end-to-end automation for all builds and deployments
- Experience in implementing Continuous Delivery (CD) pipelines
- Hands-on experience with all the AWS services mentioned in the primary skillset
- Strong verbal and written communication skills

Primary Skillset: IAM, EC2, ELB, EBS, AMI, Route53, Security Groups, AutoScaling, S3
Secondary Skillset: EKS, Terraform, CloudWatch, SNS, SQS, Athena

At GlobalLogic, you will have the opportunity to work on exciting projects in industries such as High-Tech, communication, media, healthcare, retail, and telecom. You will collaborate with a diverse team of talented individuals in an open, laid-back environment and may even have the chance to work in one of our global centers or client facilities. We prioritize work-life balance by offering flexible work schedules, work-from-home options, paid time off, and holidays. Our dedicated Learning & Development team provides opportunities for professional development through communication skills training, stress management programs, professional certifications, and technical and soft skill trainings.

In addition to competitive salaries, we offer family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), extended maternity leave, annual performance bonuses, and referral bonuses. Our fun perks include sports events, cultural activities, food at subsidized rates, corporate parties, and discounts at popular stores and restaurants.

GlobalLogic is a leader in digital engineering, helping brands worldwide design and build innovative products, platforms, and digital experiences. We operate around the world, delivering deep expertise to customers in various industries. As a Hitachi Group Company, we contribute to driving innovation through data and technology to create a sustainable society with a higher quality of life.

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are seeking a driven individual with strong financial knowledge and an analytical mindset. As a motivated team player, you will excel in maintaining efficiency and accuracy while multitasking. To be a strong candidate for this role, your experience in financial services and proven understanding of products will be crucial. You should also be a strong written and verbal communicator to effectively interact with CSU/Field RPs.

In this role, you will be responsible for working with Surveillance internal teams and business partners to define and document business requirements, and for engaging with business counterparts to ensure solutions align with business requirements and readiness levels. You will translate business requirements into actionable solutions and deliver on complex ad-hoc business analysis requests. Furthermore, you will coordinate and prioritize business needs in a matrix management environment, documenting and communicating results and recommendations to both external and internal teams.

The ideal candidate should possess 4-6 years of experience in the analytics industry with a strong background in financial services. You should have excellent quantitative, analytical, programming, and problem-solving skills. Proficiency in MS Excel, PowerPoint, and Word is essential. A highly motivated self-starter with exceptional communication skills is desired, along with the ability to work effectively in a team environment on multiple projects. Candidates should be willing to learn tools like Python, SQL, PowerApps, and Power BI. Series 7 or SIE certification is preferred. Experience with AWS infrastructure and knowledge of tools like SageMaker and Athena are advantageous.

Ameriprise India LLP has been a trusted provider of client-based financial solutions for 125 years. As a U.S.-based financial planning company headquartered in Minneapolis with a global presence, our focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Join our inclusive and collaborative culture that values your contributions and offers opportunities for growth and development. If you are talented, driven, and seeking to work for an ethical company that cares, take the next step and build your career at Ameriprise India LLP.

This is a full-time position with working hours from 2:00 pm to 10:30 pm. The role is part of the AWMPO AWMP&S President's Office within the Legal Affairs job family group.

Posted 4 days ago

Apply

6.0 - 10.0 years

20 - 35 Lacs

Pune, Delhi / NCR

Hybrid

Job Description

Responsibilities:
- Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality.
- AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc.
- Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options.
- Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services.
- Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources.
- Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency.
- Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms.
- Strong understanding of data warehousing and data lake concepts.
- Proficiency in SQL and at least one programming language (Python/PySpark).
- Good to have: experience with big data technologies like Hadoop, Spark, and Kafka.
- Knowledge of data modeling and data quality best practices.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred.
- Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift.
- Good communication skills and the ability to deliver independently.

Posted 5 days ago

Apply

2.0 - 7.0 years

15 - 20 Lacs

Hyderabad

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Preferred Qualifications:
- 3+ years of experience as a Data Engineer or in a similar role
- Experience with data modeling, data warehousing, and building ETL pipelines
- Solid working experience with Python, AWS analytical technologies, and related resources (Glue, Athena, QuickSight, SageMaker, etc.)
- Experience with Big Data tools, platforms, and architecture, with solid working experience with SQL
- Experience working in a very large data warehousing environment and distributed systems
- Solid understanding of various data exchange formats and complexities
- Industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets
- Strong data visualization skills
- Basic understanding of Machine Learning; prior experience in ML Engineering a plus
- Ability to manage on-premises data and make it inter-operate with AWS-based pipelines
- Ability to interface with Wireless Systems/SW engineers and understand the Wireless ML domain; prior experience in the Wireless (5G) domain a plus

Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred: Master's in CS/ECE with a Data Science / ML specialization.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field; OR PhD in Engineering, Information Systems, Computer Science, or related field. 3+ years of experience with a programming language such as C, C++, Java, Python, etc.

Develops, creates, and modifies general computer applications software or specialized utility programs. Analyzes user needs and develops software solutions. Designs software or customizes software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Modifies existing software to correct errors, allow it to adapt to new hardware, or improve its performance. Analyzes user needs and software requirements to determine feasibility of design within time and cost constraints. Confers with systems analysts, engineers, programmers, and others to design systems and obtain information on project limitations and capabilities, performance requirements, and interfaces. Stores, retrieves, and manipulates data for analysis of system capabilities and requirements. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure outcomes and consequences of design.

Principal Duties and Responsibilities: Completes assigned coding tasks to specifications on time without significant errors or bugs. Adapts to changes and setbacks in order to manage pressure and meet deadlines. Collaborates with others inside the project team to accomplish project objectives. Communicates with the project lead to provide status and information about impending obstacles. Quickly resolves complex software issues and bugs. Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions. Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently. Participates in technical conversations with tech leads/managers. Anticipates and communicates issues with the project team to maintain open communication. Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks. Prioritizes project deadlines and deliverables with minimal supervision. Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues). Writes readable code for large features or significant bug fixes to support collaboration with other engineers. Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner. Unit tests own code to verify the stability and functionality of a feature.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 5 days ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

Required Skills and Competencies:
- Experience: 3+ years.
- Expertise in the Python language is a MUST.
- SQL (should be able to write complex SQL queries) is a MUST.
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a MUST.
- Hands-on expertise with Apache Kafka is a MUST.
- Data Lake development experience.
- Orchestration (Apache Airflow is preferred).
- Spark and Hive: optimization of Spark/PySpark and Hive apps.
- Trino/(AWS Athena) (good to have).
- Snowflake (good to have).
- Data Quality (good to have).
- File Storage (S3 is good to have).
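As a reference point for the streaming MUSTs above, a minimal Spark Structured Streaming job that reads from Kafka and lands Parquet in a data lake; the broker, topic, and S3 paths are placeholders, and the spark-sql-kafka package must be on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

# Subscribe to a Kafka topic (broker and topic names are assumptions).
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers raw bytes; cast the payload and sink to the lake with
# checkpointing, which gives exactly-once semantics for file sinks.
query = (
    stream.select(F.col("value").cast("string").alias("payload"))
    .writeStream.format("parquet")
    .option("path", "s3://example-bucket/lake/events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```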

Posted 5 days ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Job Title: AWS Data Engineer
Location: Bangalore
Notice Period: Immediate to 60 Days Preferred

Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS and Databricks. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities, including AWS Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, and SNS.
- Optimize Spark job performance to ensure high efficiency and reliability.
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.

Qualifications:
- 5-8 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge and hands-on experience with cloud services (AWS).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud certification (AWS) is a plus.
- Familiarity with Spark Streaming is a bonus.

Posted 5 days ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Pune

Hybrid

So, what’s the role all about?

We are seeking a highly skilled Backend Software Engineer to join GenAI Solutions for CX, our fully integrated AI cloud customer experience platform. In this role you will get exposure to new and exciting technologies and collaborate with professional engineers, architects, and product managers to create NICE’s advanced line of AI cloud products.

How will you make an impact?
- Design and implement high-performance microservices using AWS cloud technologies.
- Build scalable backend systems using Python.
- Lead the development of event-driven architectures utilizing Kafka and AWS Firehose (a minimal event-emission sketch follows this posting).
- Integrate with Athena, DynamoDB, S3, and other AWS services to deliver end-to-end solutions.
- Ensure high-quality deliverables with testable, reusable, and production-ready code.
- Collaborate within an agile team, influencing architecture, design, and technology adoption.

Have you got what it takes?
- 5+ years of backend software development experience.
- Strong expertise in Python/C#.
- Deep knowledge of microservices architecture, RESTful APIs, and cloud-native development.
- Hands-on experience with AWS Lambda, S3, Athena, Kinesis Firehose, and Kafka.
- Strong database skills (SQL & NoSQL), including schema design and performance tuning.
- Experience designing scalable systems and delivering enterprise-grade software.
- Comfortable working with CI/CD pipelines and DevOps practices.
- Passion for clean code, best practices, and continuous improvement.
- Excellent communication and collaboration abilities.
- Fluent in English (written and spoken).

What’s in it for you?

Join an ever-growing, market-disrupting global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7981
Reporting into: Tech Manager
Role Type: Individual Contributor
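To make the event-driven path above concrete, here is a minimal Python sketch of a backend service emitting JSON events to a Kinesis Data Firehose delivery stream, which buffers and delivers them to S3 where Athena can query them. The stream name, region, and event fields are placeholders, and error handling is reduced to the essentials.

```python
# Minimal sketch of an event-driven emission path: a service puts JSON events
# onto a Kinesis Data Firehose stream; Firehose batches them into S3 objects
# that Athena can query. Stream name and region are assumptions.
import json

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # assumed region

def emit_event(event: dict) -> str:
    """Send one event; Firehose handles batching into S3 objects."""
    response = firehose.put_record(
        DeliveryStreamName="cx-interaction-events",  # placeholder stream name
        # Newline-delimited JSON keeps the resulting S3 objects Athena-friendly.
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )
    return response["RecordId"]

record_id = emit_event({"session_id": "abc-123", "intent": "billing", "score": 0.92})
print("queued as", record_id)
```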

Posted 5 days ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Pune

Work from Office

Experience: 4+ years.

  • Expertise in Python is a must.
  • SQL is a must (should be able to write complex SQL queries).
  • Hands-on experience in Apache Flink Streaming or Spark Streaming is a must.
  • Hands-on expertise with Apache Kafka is a must.
  • Data lake development experience.
  • Orchestration (Apache Airflow is preferred; a minimal DAG sketch follows this posting).
  • Spark and Hive: optimization of Spark/PySpark and Hive apps.
  • Trino/AWS Athena (good to have).
  • Snowflake (good to have).
  • Data quality (good to have).
  • File storage (S3 is good to have).

Our Offering:
  • Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
  • Wellbeing programs and work-life balance: integration and passion-sharing events.
  • Attractive salary and company initiative benefits.
  • Courses and conferences.
  • Hybrid work culture.
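For the orchestration requirement above, here is a minimal, illustrative Apache Airflow DAG: a daily ingest step followed by a data-quality check. The DAG ID, task IDs, and task bodies are placeholders for the kind of pipeline this role describes.

```python
# Minimal orchestration sketch: an Airflow DAG running a daily ingest task
# followed by a data-quality check. IDs and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**context):
    print("pulling yesterday's files into the lake...")  # placeholder logic

def check_quality(**context):
    print("row counts, null ratios, schema drift checks...")  # placeholder logic

with DAG(
    dag_id="daily_lake_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    quality_task = PythonOperator(task_id="quality_check", python_callable=check_quality)

    # The quality check runs only after the ingest task succeeds.
    ingest_task >> quality_task
```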

Posted 5 days ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Company Description

Epsilon is an all-encompassing global marketing innovator, supporting 15 of the top 20 global brands. We provide unrivaled data intelligence and customer insights, world-class technology including loyalty, email and CRM platforms, and data-driven creative, activation and execution. Epsilon's digital media arm, Conversant, is a leader in personalized digital advertising and insights through its proprietary technology and trove of consumer marketing data, delivering digital marketing with unprecedented scale, accuracy and reach through personalized media programs and through CJ Affiliate by Conversant, one of the world's largest affiliate marketing networks. Together, we bring personalized marketing to consumers across offline and online channels, at moments of interest, that help drive business growth for brands. Recognized by Ad Age as the #1 World's Largest CRM/Direct Marketing Agency Network, #1 Largest U.S. Agency from All Disciplines, #1 Largest U.S. CRM/Direct Marketing Agency Network and #1 Largest U.S. Mobile Marketing Agency, Epsilon employs over 8,000 associates in 70 offices worldwide. Epsilon is part of Alliance Data, a Fortune 500 company and a Fortune 100 Best Places to Work For company. For more information, visit www.epsilon.com and follow us on Twitter @EpsilonMktg.

Job Description

About BU: The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative thinkers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry best practices, ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story.

Why we are looking for you: We are looking for a Senior Software Engineer to work on a groundbreaking multichannel SaaS Digital Marketing Platform that focuses on uniquely identifying customer patterns, effectively interacting with customers across channels, and achieving a positive return on marketing investment (ROMI). The platform consolidates and integrates the features and functionality typically found in stand-alone services and channel-specific messaging platforms to give marketers a tightly integrated, easily orchestrated, insights-driven, cross-channel marketing capability. The primary role of the Senior Software Engineer is to envision and build internet-scale services on the cloud using Java and distributed technologies, with a 60-40 split between backend development in Java and frontend development in Angular.

What you will enjoy in this role: Our integrated suite of modular products is designed to help deliver personalized experiences and drive meaningful outcomes. Our tech stack caters to a fusion of data and technology, with SaaS offerings developed with a cloud-first approach. Here, a solid understanding of software security practices, including user authentication and authorization, and being data-savvy would be key. You should also come with the ability to leverage best practices in design patterns and design algorithms for software development that focus on high quality and agility, and have a good understanding of Agile methodologies like SCRUM.

What you will do: Be responsible for development and maintenance of applications with technologies involving Java and distributed technologies. Collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements. Assist in the development and documentation of software objectives, deliverables, and specifications in collaboration with internal users and departments. Collaborate with the QA team to define test cases and metrics, and resolve questions about test results. Assist in the design and implementation process for new products; research and create POCs for possible solutions. Develop components based on business and/or application requirements. Create unit tests in accordance with team policies and procedures. Advise and mentor team members in specialized technical areas, and fulfill administrative duties as defined by the support process. Create value-adds that contribute to cost optimization, scalability, reliability, and secure solutions.

Qualifications

Bachelor's degree or equivalent in computer science. 6+ years of experience in Java/Angular/SQL/AWS/Microservices. Preferred knowledge/experience in the following technologies: 2+ years with UI technologies like Angular 2 or above; 1+ year of experience in cloud computing like AWS, Azure, GCP, PCF or OCI; experience with the following tools: Eclipse, Maven, Gradle, DB tools, Bitbucket/JIRA/Confluence. Can develop SOA services, with good knowledge of REST APIs and microservice architectures. Solid knowledge of web architectural and design patterns. Understands software security practices, including user authentication and authorization, data validation, and common DoS and SQL injection techniques. Familiar with profiling, code coverage, logging, and common IDEs and other development tools. Familiar with Agile methodologies (SCRUM). Strong communication skills (verbal and written), with the ability to work within tight deadlines and effectively prioritize and execute tasks in a high-pressure environment. Demonstrated ability to interface with Business, Analytics and IT organizations. Ability to work effectively in a short-cycle, team-oriented environment, managing multiple priorities and tasks. Ability to identify non-obvious solutions to complex problems.

Posted 6 days ago

Apply

9.0 - 13.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

The structured finance analytics team at Morningstar DBRS in Mumbai is seeking a Manager who will lead a team of Quant Analysts in automating data analysis processes, developing data analytics, and enhancing workflow optimization tools to support the rating, research, and surveillance process. The ideal candidate should have a strong understanding of Structured Finance products (RMBS, ABS, CMBS) and possess technical skills in Python, Tableau, AWS, Athena, SQL, and VBA. This role requires expertise in managing a team, fostering a collaborative work environment, and prioritizing work schedules to ensure timely and quality delivery. The Manager will also be responsible for maintaining communication with global teams, transforming data, implementing quick-fix solutions, and ensuring compliance with regulatory and company policies.

The successful candidate should have 9-11 years of experience in Credit Modeling/Model Validation roles, hold an MBA (Finance), BTech, or PhD (Math) from a Tier I college, and demonstrate strong analytical skills with experience working with large databases/datasets. Proficiency in Python, Tableau, Microsoft Excel, and MSSQL, and familiarity with AWS infrastructure, will be essential for this role. The Manager should be highly organized, efficient, and capable of multitasking to meet tight deadlines while ensuring high-quality deliverables. Morningstar DBRS is an equal opportunity employer committed to empowering investor success and driving innovation in the credit ratings industry.

Posted 6 days ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

The ideal candidate for this position should have previous experience in building data science/algorithm-based products, which would be a significant advantage. Experience in handling healthcare data is also desired. An educational qualification of a Bachelor's/Master's in Computer Science, Data Science, or related subjects from a reputable institution is required. With 7-9 years of industry experience, the candidate should have a strong background in developing data science models and solutions. The ability to quickly adapt to new programming languages, technologies, and frameworks is essential, as is a deep understanding of data structures and algorithms. The candidate should also have a proven track record of implementing end-to-end data science modeling projects and providing guidance and thought leadership to the team. Experience in a consulting environment with a hands-on attitude is preferred.

As a Data Science Lead, the primary responsibility will be to lead a team of analysts, data scientists, and engineers to deliver end-to-end solutions for pharmaceutical clients. The candidate is expected to participate in client proposal discussions with senior stakeholders and provide technical thought leadership. Expertise in all phases of model development is required: exploratory data analysis, hypothesis testing, feature creation, dimension reduction, model training, selection, validation, and deployment (a miniature version of this loop is sketched below). A deep understanding of statistical and machine learning methods such as logistic regression, SVM, decision trees, random forests, neural networks, and regression is essential. Mathematical knowledge of correlation/causation, classification, recommenders, probability, stochastic processes, and NLP, and their practical implementation to solve business problems, is necessary. The candidate should also be able to implement ML models in an optimized and sustainable framework, and build business understanding in the healthcare domain to develop relevant analytics use cases.

In terms of technical skills, the candidate should have expert-level proficiency in programming languages like Python/SQL, along with working knowledge of relational SQL and NoSQL databases such as Postgres and Redshift. Extensive knowledge of predictive and machine learning models, NLP techniques, deep learning, and unsupervised learning is required, as is familiarity with data structures, pre-processing, feature engineering, sampling techniques, and statistical analysis. Exposure to open-source tools, cloud platforms like AWS and Azure, AI tools like LLM models, and visualization tools like Tableau and Power BI is preferred.

If you do not meet every job requirement, the company encourages candidates to apply anyway, as they are dedicated to building a diverse, inclusive, and authentic workplace. Your excitement for the role and potential fit may make you the right candidate for this position or others within the company.
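The model-development loop described above, in miniature: split the data, train a regularized classifier, and validate on a holdout set. This is a minimal, illustrative scikit-learn sketch on synthetic data; in practice the features would come from EDA and feature engineering on real (e.g., healthcare) data.

```python
# Miniature model-development loop: split, train, validate. The data here is
# synthetic; features and labels are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))  # synthetic feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Scaling plus regularized logistic regression in a single pipeline.
model = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))
model.fit(X_train, y_train)

# Validate on held-out data; AUC is a common metric for binary classifiers.
probs = model.predict_proba(X_test)[:, 1]
print(f"holdout AUC: {roc_auc_score(y_test, probs):.3f}")
```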

Posted 6 days ago

Apply

Exploring Athena Jobs in India

India's job market for Athena professionals is thriving, with numerous opportunities available for individuals skilled in this area. From entry-level positions to senior roles, companies across various industries are actively seeking talent with Amazon Athena expertise to drive their businesses forward.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Chennai

Average Salary Range

The average salary range for Athena professionals in India varies based on experience and expertise. Entry-level positions can expect to earn around INR 4-7 lakhs per annum, while experienced professionals can command salaries ranging from INR 10-20 lakhs per annum.

Career Path

In the field of Athena, a typical career progression may include roles such as Junior Developer, Developer, Senior Developer, Tech Lead, and eventually positions like Architect or Manager. Continuous learning and upskilling are essential to advance in this field.

Related Skills

Apart from proficiency in Athena, professionals in this field are often expected to have skills such as SQL, data analysis, data visualization, AWS, and Python. Strong problem-solving abilities and attention to detail are also highly valued in Athena roles.

Interview Questions

  • What is Amazon Athena and how does it differ from traditional databases? (medium)
  • Can you explain how partitioning works in Athena? (advanced; see the worked sketch after this list)
  • How do you optimize queries in Athena for better performance? (medium)
  • What are the best practices for managing data in Athena? (basic)
  • Have you worked with complex joins in Athena? Can you provide an example? (medium)
  • What is the difference between Amazon Redshift and Amazon Athena? (advanced)
  • How do you handle errors and exceptions in Athena queries? (medium)
  • Have you used User Defined Functions (UDFs) in Athena? If yes, explain a scenario where you implemented them. (advanced)
  • How do you schedule queries in Athena for automated execution? (medium)
  • Can you explain the different data types supported by Athena? (basic)
  • What security measures do you implement to protect sensitive data in Athena? (medium)
  • Have you worked with nested data structures in Athena? If yes, share your experience. (advanced)
  • How do you troubleshoot performance issues in Athena queries? (medium)
  • What is the significance of query caching in Athena and how does it work? (medium)
  • Can you explain the concept of query federation in Athena? (advanced)
  • How do you handle large datasets in Athena efficiently? (medium)
  • Have you integrated Athena with other AWS services? If yes, describe the integration process. (advanced)
  • How do you monitor query performance in Athena? (medium)
  • What are the limitations of Amazon Athena? (basic)
  • Have you worked on cost optimization strategies for Athena queries? If yes, share your approach. (advanced)
  • How do you ensure data security and compliance in Athena? (medium)
  • Can you explain the difference between serverless and provisioned query execution in Athena? (medium)
  • How do you handle complex data transformation tasks in Athena? (medium)
  • Have you implemented data lake architecture using Athena? If yes, describe the process. (advanced)
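As a worked sketch for the partitioning and cost-optimization questions above, the following minimal Python script runs a query against a date-partitioned Athena table via boto3 and reports how much data the query scanned. The database, table, partition column, result bucket, and region are illustrative assumptions. Filtering on the partition column lets Athena prune partitions so only the matching S3 prefixes are read, which is the core of Athena cost optimization.

```python
# Minimal sketch: run a partition-pruned Athena query via boto3 and report
# the data scanned. Database, table, column, bucket, and region are
# illustrative placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # assumed region

sql = """
SELECT action, COUNT(*) AS events
FROM analytics.clickstream      -- assumed partitioned table
WHERE dt = '2024-06-01'         -- dt is the assumed partition column
GROUP BY action
"""

qid = athena.start_query_execution(
    QueryString=sql,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]
    status = state["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

scanned = state["Statistics"].get("DataScannedInBytes", 0)
print(f"query {status}; scanned {scanned / 1e6:.1f} MB")

# First row of the result set is the header; the rest are data rows.
for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"][1:]:
    print([f.get("VarCharValue") for f in row["Data"]])
```

Since Athena bills per byte scanned, comparing DataScannedInBytes with and without the partition filter is a quick, practical way to demonstrate partition pruning in an interview.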

Closing Remark

As you explore opportunities in the Athena job market in India, remember to showcase your expertise, skills, and enthusiasm for the field during interviews. With the right preparation and confidence, you can land your dream job in this dynamic and rewarding industry. Good luck!
