Problem :
After my article "Installing Oracle XE on Windows 7 64-bit is a pain. Sometimes it works, sometimes it doesn't.", I decided to use another unsupported distribution to install Oracle.
Yes, really...
Well, CentOS is fine too (no fail2ban by default, but fine all the same).
For development databases, obviously. I don't want to see this anywhere else.
Solution :
We use Oracle EE, available on the Oracle website.
- Create an oracle user
- Password: oracle
- Update the .bashrc as you like
- Run xhost + as root to be able to use the graphical installer as the oracle user
- The binaries can be downloaded here : https://wiki.debian.org/DataBase/Oracle
- Careful: not the zlinux distribution, obviously
- The library prerequisites (and a number of the tasks described above) :
- https://github.com/antonioluna/oracle12c-debian/blob/master/Pre-install.sh
- it works really well...
- In the .bashrc :
export ORACLE_HOME
export ORACLE_SID
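For example, a minimal .bashrc fragment; the values below are assumptions, chosen to match the paths used later in this post:

```shell
# Hypothetical ORACLE_HOME and SID, matching the paths used later in this post.
export ORACLE_HOME=/home/oracle/oracle-product/product/12.1.0/dbhome_1
export ORACLE_SID=MABASE
# Put sqlplus and the other Oracle tools on the PATH.
export PATH=$ORACLE_HOME/bin:$PATH
```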
- Install rlwrap if sqlplus misbehaves at launch
apt-get install rlwrap
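A common companion to this (an assumption on my side, not required by the install): alias sqlplus so it always goes through rlwrap:

```shell
# Give sqlplus readline-style line editing and command history.
alias sqlplus='rlwrap sqlplus'
```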
- As my oracle user
cd ~/oracle-product/product/12.1.0/dbhome_1/dbs
mv init.ora /home/oracle/oracle-product/product/12.1.0/dbhome_1/dbs/init<MABASE>.ora
- Create the database directory
mkdir /home/oracle/oradata/<MA BASE>
- create database
- https://docs.oracle.com/cd/B28359_01/server.111/b28310/create003.htm#ADMIN1107
- Edit the file /home/oracle/oracle-product/product/12.1.0/dbhome_1/dbs/init<MABASE>.ora
# Change '<ORACLE_BASE>' to point to the oracle base (the one you specify at
# install time)
db_name='MABASE'
memory_target=512M
processes = 150
audit_file_dest='/home/oracle/oracle-product/admin/orcl/adump'
audit_trail ='db'
db_block_size=8192
db_domain=''
db_recovery_file_dest='/home/oracle/oracle-product/fast_recovery_area'
db_recovery_file_dest_size=2G
diagnostic_dest='/home/oracle/oracle-product'
dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
# You may want to ensure that control files are created on separate physical
# devices
control_files = (/home/oracle/oradata/mabase/ora_control1, /home/oracle/oradata/mabase/ora_control2)
compatible ='11.2.0'
- Start over if it did not work :
rm -rf /home/oracle/oradata/mabase/*
- Database creation : sqlplus / as sysdba
STARTUP NOMOUNT;
CREATE DATABASE mybase
USER SYS IDENTIFIED BY lozenge
USER SYSTEM IDENTIFIED BY lozenge
LOGFILE GROUP 1 ('/home/oracle/oradata/mybase/redo01.log') SIZE 100M,
GROUP 2 ('/home/oracle/oradata/mybase/redo02.log') SIZE 100M,
GROUP 3 ('/home/oracle/oradata/mybase/redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/home/oracle/oradata/mybase/system01.dbf' SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
SYSAUX DATAFILE '/home/oracle/oradata/mybase/sysaux01.dbf' SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TABLESPACE calv14
DATAFILE '/home/oracle/oradata/mybase/database.dbf'
SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/home/oracle/oradata/mybase/temp01.dbf'
SIZE 350M REUSE
UNDO TABLESPACE undotbs1
DATAFILE '/home/oracle/oradata/mybase/undotbs01.dbf'
SIZE 350M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
@?/rdbms/admin/catalog.sql (long; in SQL*Plus, ? stands for ORACLE_HOME)
@?/rdbms/admin/catproc.sql <-- dba_data_files is created here (even longer)
- Start over if it did not work :
- drop user userBase cascade;
- User creation :
define SG_SCHEMA = 'userBase'
define DEFAULT_TBS = 'mytbs'
define TEMP_TBS = 'tempts1'
CREATE USER &SG_SCHEMA IDENTIFIED BY password
DEFAULT TABLESPACE &DEFAULT_TBS
TEMPORARY TABLESPACE &TEMP_TBS
;
grant connect to &SG_SCHEMA;
grant resource to &SG_SCHEMA;
GRANT UNLIMITED TABLESPACE TO &SG_SCHEMA;
GRANT CREATE SYNONYM TO &SG_SCHEMA ;
GRANT CREATE VIEW TO &SG_SCHEMA ;
Problem :
I am trying to create an aggregation (count and groupBy) on my documents.
Thanks to these links :
I nearly solved my problem... nearly :)
Solution :
Documents to analyze :
- the _id of the application is defined by an id attribute...
- instance is not present in all documents
{
  phase : "Production",
  application : {
    _id : 1234
  },
  instance : "myInstance"
}
The first try is the Spring Data query-derivation way : there is no groupBy ! But count works, if that is enough for you :
# If instance is set to null, returns the documents which do not have the field
countByApplicationIdAndPhaseAndInstanceId(Long applicationId, Phase phase, String instance)
# Another way to test the presence of instance
countByApplicationIdAndPhaseAndInstanceIdExists(Long applicationId, Phase phase, Boolean exists)
So, thanks to Baeldung (many thanks) ! Here is a working solution; take care about :
- the filtering order ! The match operation works either on the documents or on the result of the group by, depending on its position (see the Baeldung example for filtering on the result of the group by)
#Filter on the documents
Aggregation aggregation = newAggregation(filterStates, agg);
#Filter on the result of the aggregation
Aggregation aggregation = newAggregation(agg, filterStates);
- my applicationId, as used in the Spring Data query-derivation way, must be written with the Mongo id : application._id
- application._id does not have to be present in the group clause, since it is already filtered (still, it took me 2 long hours to make it work....)
- Don't forget @Id on the result bean. I read somewhere that you can execute the generic command in Mongo and not care about the result bean.
- The @Id must be on the first item of the group
- Do not miss the collection name in the aggregation command (here MONGO_EVALUATION_COLLECTION_NAME)
// Static imports used below (plus the usual Spring Data / SLF4J imports)
import static org.springframework.data.mongodb.core.aggregation.Aggregation.group;
import static org.springframework.data.mongodb.core.aggregation.Aggregation.match;
import static org.springframework.data.mongodb.core.aggregation.Aggregation.newAggregation;

@Service
public class EvaluationAdditionalRepository {

    private static final Logger LOGGER = LoggerFactory.getLogger(EvaluationAdditionalRepository.class);

    @Autowired
    private MongoTemplate mongoTemplate;

    public List<InstanceCount> getInstanceByApplicationAndPhase(Long applicationId) {
        // Group on the Mongo id of the embedded application document
        GroupOperation agg = group("application._id", "instanceId", "phase").count().as("countInstance");
        // Match on the documents, before the group stage
        MatchOperation filterStates = match(new Criteria("application._id").is(applicationId));
        Aggregation aggregation = newAggregation(filterStates, agg);
        AggregationResults<InstanceCount> result = mongoTemplate.aggregate(aggregation, MONGO_EVALUATION_COLLECTION_NAME, InstanceCount.class);
        return result.getMappedResults();
    }
}
And the result bean :
import org.springframework.data.annotation.Id;
public class InstanceCount {

    @Id
    private Long applicationId;
    // Enums are allowed
    private Phase phase;
    private long countInstance;
    private String instanceId;

    public Phase getPhase() {
        return phase;
    }

    public void setPhase(Phase phase) {
        this.phase = phase;
    }

    public long getCountInstance() {
        return countInstance;
    }

    public void setCountInstance(long countInstance) {
        this.countInstance = countInstance;
    }

    public Long getApplicationId() {
        return applicationId;
    }

    public void setApplicationId(Long applicationId) {
        this.applicationId = applicationId;
    }

    public String getInstanceId() {
        return instanceId;
    }

    public void setInstanceId(String instanceId) {
        this.instanceId = instanceId;
    }

    @Override
    public String toString() {
        return "InstanceCount{" +
                "applicationId='" + applicationId + '\'' +
                ", phase='" + phase + '\'' +
                ", instanceId='" + instanceId + '\'' +
                ", countInstance=" + countInstance +
                '}';
    }
}
Problem :
I was creating a simple cron job to connect from remote-server-1 to remote-server-2.
Testing the job with a direct call or with run-parts was OK :
# direct call to my script
/home/admin/myscript.sh
# or with run-parts
run-parts -v --test /etc/cron.hourly
But when called from cron I had a : Permission denied (publickey).
Solution :
First, I tried to reproduce it in the cron environment with this command line (extracted from there).
I finally reproduced the problem.
So I added the -vvv option to my ssh connection to get more details : still not enough clues, the permission is refused.
Then I decided to compare with my ssh connection from the bash command line :
myuser@remote-server-1 > ssh -vvv remote-server-2
What a surprise :
- it uses my personal key to connect to remote-server-2 instead of the remote-server-1 key !
- my personal key is deployed on both remote-server-1 and remote-server-2
So when I run the connection by hand it works, because it uses my personal key; but when run from the cron environment it uses the remote-server-1 key, and that one was not declared on remote-server-2.
SSH is able to use your own connection key in priority when trying to connect to another server...
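One way to see the difference (a sketch, assuming the personal key was served by an ssh agent): cron runs with an almost empty environment, so the agent socket that carries the personal key is simply not there, and ssh falls back to the host's own key files.

```shell
# Simulate the cron environment: no inherited variables, plain sh.
# Without SSH_AUTH_SOCK, ssh cannot reach the agent and only tries
# the local host's own identity files.
env -i /bin/sh -c 'echo "SSH_AUTH_SOCK=${SSH_AUTH_SOCK:-unset}"'
```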
Problem :
I was looking into why my multipart header was not sent when suddenly JMeter started sending my POST HTTP requests in a raw format.
Although my GUI HTTP Request sampler has a normal list of parameters (param1=value1, etc...), it sends :
param1param2
Solution :
No solution on Google, but it was "simple" : in my HTTP Request Defaults, the tab had been switched from "Parameters" to "Body Data"; even though both were empty, that was enough to invite chaos...
The first problem, the multipart/form-data header not being sent, was because a default one was set in the default HTTP Header Manager...
End of day....
Problem :
On Debian, stop resolv.conf (DNS) from being updated when there are multiple DHCP network interfaces.
Solution :
A first link : Never update resolv.conf with DHCP client
But we don't want to never update it; we want to update it sometimes...
On the Red Hat family it's simple (see the previous link) : PEERDNS=NO on the right interfaces
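On the Red Hat side that looks like this (the interface name here is hypothetical):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1
# Do not let this interface's DHCP lease rewrite /etc/resolv.conf
PEERDNS=no
```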
On the Debian family.... let's use a hook, as suggested :
Create a hook to prevent the /etc/resolv.conf file from being updated
You need to create the /etc/dhcp3/dhclient-enter-hooks.d/nodnsupdate file on Debian / Ubuntu Linux :
# vi /etc/dhcp3/dhclient-enter-hooks.d/nodnsupdate
Append the following code :
#!/bin/sh
make_resolv_conf() {
  :
}
OK, but this hook prevents ALL interfaces from updating resolv.conf. The idea :
- in the hook, test the interface name
- if it is an authorized one, call the original make_resolv_conf
- otherwise do nothing
In bash it is not easy to have several functions with the same name, but thanks StackOverflow ! :
#!/bin/bash
# copies the function named $1 to the name $2
copy_function() {
  declare -F $1 > /dev/null || return 1
  eval "$(echo "${2}()"; declare -f ${1} | tail -n +2)"
}
# Import the original make_resolv_conf
# Normally useless, hooks are called after the make_resolv_conf declaration
# . /sbin/dhclient-script
copy_function make_resolv_conf original_make_resolv_conf
make_resolv_conf() {
  if [ "${interface}" = "authorizedInterface" ] ; then
    original_make_resolv_conf
  fi
}
Update :
The previous solution does not work... declare is unknown to sh/dash, and the script is run by sh/dash. So copying the function is not possible.
Ideas :
- copy make_resolv_conf into this file as original_make_resolv_conf : it works, but it is ugly, because security patches would not be picked up
- use 2 hooks : one on enter that saves resolv.conf, and one on exit that restores resolv.conf if ${interface} is not authorized
- try to extract make_resolv_conf from /sbin/dhclient-script : not so easy...
The best solution is the two hooks; it's a pity :) I liked copy_function :) :
# vi /etc/dhcp3/dhclient-enter-hooks.d/selectdns-enter
#!/bin/sh
cp /etc/resolv.conf /tmp/resolv.conf.${interface}
# vi /etc/dhcp3/dhclient-exit-hooks.d/selectdns-exit
#!/bin/sh
if [ "${interface}" != "authorizedInterface" ] ; then
  echo "${interface} not authorized"
  cp /tmp/resolv.conf.${interface} /etc/resolv.conf
fi
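The exit-hook logic can be sanity-checked outside dhclient by faking the interface variable (the interface names here are hypothetical):

```shell
#!/bin/sh
# Simulates the exit-hook decision for two interfaces:
# only the unauthorized one would restore resolv.conf.
check_interface() {
  interface="$1"
  if [ "${interface}" != "eth0" ] ; then
    echo "${interface}: would restore resolv.conf"
  else
    echo "${interface}: authorized, keeping the DHCP-provided DNS"
  fi
}
check_interface eth0
check_interface wlan0
```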